2025-07-05 22:11:16.067894 | Job console starting
2025-07-05 22:11:16.084566 | Updating git repos
2025-07-05 22:11:16.149271 | Cloning repos into workspace
2025-07-05 22:11:16.384196 | Restoring repo states
2025-07-05 22:11:16.414072 | Merging changes
2025-07-05 22:11:16.414094 | Checking out repos
2025-07-05 22:11:16.847544 | Preparing playbooks
2025-07-05 22:11:17.504416 | Running Ansible setup
2025-07-05 22:11:22.010683 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-05 22:11:22.760101 |
2025-07-05 22:11:22.760302 | PLAY [Base pre]
2025-07-05 22:11:22.777538 |
2025-07-05 22:11:22.777682 | TASK [Setup log path fact]
2025-07-05 22:11:22.798803 | orchestrator | ok
2025-07-05 22:11:22.816069 |
2025-07-05 22:11:22.816262 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-05 22:11:22.846219 | orchestrator | ok
2025-07-05 22:11:22.858686 |
2025-07-05 22:11:22.858799 | TASK [emit-job-header : Print job information]
2025-07-05 22:11:22.910009 | # Job Information
2025-07-05 22:11:22.910227 | Ansible Version: 2.16.14
2025-07-05 22:11:22.910265 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-07-05 22:11:22.910298 | Pipeline: post
2025-07-05 22:11:22.910321 | Executor: 521e9411259a
2025-07-05 22:11:22.910342 | Triggered by: https://github.com/osism/testbed/commit/724f8b57e227a930653d29a5b0bf5f7ea406e2ee
2025-07-05 22:11:22.910364 | Event ID: f32e3bba-59ec-11f0-840a-ca9c09b3565c
2025-07-05 22:11:22.917483 |
2025-07-05 22:11:22.917598 | LOOP [emit-job-header : Print node information]
2025-07-05 22:11:23.062520 | orchestrator | ok:
2025-07-05 22:11:23.062791 | orchestrator | # Node Information
2025-07-05 22:11:23.062857 | orchestrator | Inventory Hostname: orchestrator
2025-07-05 22:11:23.062885 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-05 22:11:23.062908 | orchestrator | Username: zuul-testbed04
2025-07-05 22:11:23.062928 | orchestrator | Distro: Debian 12.11
2025-07-05 22:11:23.062952 | orchestrator | Provider: static-testbed
2025-07-05 22:11:23.062973 | orchestrator | Region:
2025-07-05 22:11:23.062996 | orchestrator | Label: testbed-orchestrator
2025-07-05 22:11:23.063018 | orchestrator | Product Name: OpenStack Nova
2025-07-05 22:11:23.063038 | orchestrator | Interface IP: 81.163.193.140
2025-07-05 22:11:23.085237 |
2025-07-05 22:11:23.086723 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-05 22:11:23.677221 | orchestrator -> localhost | changed
2025-07-05 22:11:23.698018 |
2025-07-05 22:11:23.698201 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-05 22:11:24.803482 | orchestrator -> localhost | changed
2025-07-05 22:11:24.818984 |
2025-07-05 22:11:24.819119 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-05 22:11:25.129451 | orchestrator -> localhost | ok
2025-07-05 22:11:25.137760 |
2025-07-05 22:11:25.137910 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-05 22:11:25.167913 | orchestrator | ok
2025-07-05 22:11:25.184415 | orchestrator | included: /var/lib/zuul/builds/66245ee68aa34704a6dbdb72dcafc991/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-05 22:11:25.192509 |
2025-07-05 22:11:25.192610 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-05 22:11:26.195486 | orchestrator -> localhost | Generating public/private rsa key pair.
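The "Create Temp SSH key" task above boils down to an `ssh-keygen` call; a minimal sketch, where the temp directory is a stand-in for the Zuul workspace path shown in the log:

```shell
#!/bin/sh
# Generate a per-build RSA keypair the way the add-build-sshkey role does.
WORK=$(mktemp -d)                            # stand-in for the Zuul work dir
BUILD_UUID=66245ee68aa34704a6dbdb72dcafc991  # build UUID from this log
ssh-keygen -t rsa -b 3072 -N '' -C zuul-build-sshkey \
  -f "$WORK/${BUILD_UUID}_id_rsa"
# The role then installs the public key on all nodes and loads the
# private key into the executor's ssh-agent (the "Add back temp key"
# task below, which runs ssh-add).
```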
2025-07-05 22:11:26.195766 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/66245ee68aa34704a6dbdb72dcafc991/work/66245ee68aa34704a6dbdb72dcafc991_id_rsa
2025-07-05 22:11:26.195808 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/66245ee68aa34704a6dbdb72dcafc991/work/66245ee68aa34704a6dbdb72dcafc991_id_rsa.pub
2025-07-05 22:11:26.195836 | orchestrator -> localhost | The key fingerprint is:
2025-07-05 22:11:26.195864 | orchestrator -> localhost | SHA256:57XB4xkaoA+yp/WUu0+1JlvzMua5DsQgihnCI3TiL5E zuul-build-sshkey
2025-07-05 22:11:26.195886 | orchestrator -> localhost | The key's randomart image is:
2025-07-05 22:11:26.195921 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-05 22:11:26.195943 | orchestrator -> localhost | | o . |
2025-07-05 22:11:26.195965 | orchestrator -> localhost | |+ + |
2025-07-05 22:11:26.195986 | orchestrator -> localhost | |oE. . o |
2025-07-05 22:11:26.196005 | orchestrator -> localhost | |..++ . o + . |
2025-07-05 22:11:26.196025 | orchestrator -> localhost | | .o.o o S = B |
2025-07-05 22:11:26.196048 | orchestrator -> localhost | | . o o = * B |
2025-07-05 22:11:26.196068 | orchestrator -> localhost | | . o + * O |
2025-07-05 22:11:26.196087 | orchestrator -> localhost | | + o o *++ |
2025-07-05 22:11:26.196107 | orchestrator -> localhost | | . +oo+=+. |
2025-07-05 22:11:26.196128 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-05 22:11:26.196233 | orchestrator -> localhost | ok: Runtime: 0:00:00.476587
2025-07-05 22:11:26.203855 |
2025-07-05 22:11:26.203963 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-05 22:11:26.240837 | orchestrator | ok
2025-07-05 22:11:26.253957 | orchestrator | included: /var/lib/zuul/builds/66245ee68aa34704a6dbdb72dcafc991/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-05 22:11:26.263040 |
2025-07-05 22:11:26.263138 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-05 22:11:26.287470 | orchestrator | skipping: Conditional result was False
2025-07-05 22:11:26.295927 |
2025-07-05 22:11:26.296032 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-05 22:11:27.014884 | orchestrator | changed
2025-07-05 22:11:27.024345 |
2025-07-05 22:11:27.024484 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-05 22:11:27.308208 | orchestrator | ok
2025-07-05 22:11:27.317668 |
2025-07-05 22:11:27.317799 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-05 22:11:27.805393 | orchestrator | ok
2025-07-05 22:11:27.814338 |
2025-07-05 22:11:27.814474 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-05 22:11:28.253309 | orchestrator | ok
2025-07-05 22:11:28.261278 |
2025-07-05 22:11:28.261414 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-05 22:11:28.286747 | orchestrator | skipping: Conditional result was False
2025-07-05 22:11:28.297262 |
2025-07-05 22:11:28.297395 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-05 22:11:28.808401 | orchestrator -> localhost | changed
2025-07-05 22:11:28.832478 |
2025-07-05 22:11:28.832653 | TASK [add-build-sshkey : Add back temp key]
2025-07-05 22:11:29.190714 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/66245ee68aa34704a6dbdb72dcafc991/work/66245ee68aa34704a6dbdb72dcafc991_id_rsa (zuul-build-sshkey)
2025-07-05 22:11:29.191011 | orchestrator -> localhost | ok: Runtime: 0:00:00.023061
2025-07-05 22:11:29.198455 |
2025-07-05 22:11:29.198576 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-05 22:11:29.663985 | orchestrator | ok
2025-07-05 22:11:29.675510 |
2025-07-05 22:11:29.675694 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-05 22:11:29.701144 | orchestrator | skipping: Conditional result was False
2025-07-05 22:11:29.755152 |
2025-07-05 22:11:29.755313 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-05 22:11:30.189367 | orchestrator | ok
2025-07-05 22:11:30.204402 |
2025-07-05 22:11:30.204539 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-05 22:11:30.250696 | orchestrator | ok
2025-07-05 22:11:30.267314 |
2025-07-05 22:11:30.267646 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-05 22:11:30.565732 | orchestrator -> localhost | ok
2025-07-05 22:11:30.573608 |
2025-07-05 22:11:30.573726 | TASK [validate-host : Collect information about the host]
2025-07-05 22:11:31.808198 | orchestrator | ok
2025-07-05 22:11:31.826220 |
2025-07-05 22:11:31.826362 | TASK [validate-host : Sanitize hostname]
2025-07-05 22:11:31.891724 | orchestrator | ok
2025-07-05 22:11:31.900502 |
2025-07-05 22:11:31.900639 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-05 22:11:32.472182 | orchestrator -> localhost | changed
2025-07-05 22:11:32.478752 |
2025-07-05 22:11:32.478935 | TASK [validate-host : Collect information about zuul worker]
2025-07-05 22:11:32.952460 | orchestrator | ok
2025-07-05 22:11:32.957912 |
2025-07-05 22:11:32.958025 | TASK [validate-host : Write out all zuul information for each host]
2025-07-05 22:11:33.540312 | orchestrator -> localhost | changed
2025-07-05 22:11:33.551254 |
2025-07-05 22:11:33.551374 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-05 22:11:33.944825 | orchestrator | ok
2025-07-05 22:11:33.954419 |
2025-07-05 22:11:33.954564 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-05 22:12:06.845599 | orchestrator | changed:
2025-07-05 22:12:06.845897 | orchestrator | .d..t...... src/
2025-07-05 22:12:06.845954 | orchestrator | .d..t...... src/github.com/
2025-07-05 22:12:06.845996 | orchestrator | .d..t...... src/github.com/osism/
2025-07-05 22:12:06.846033 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-05 22:12:06.846067 | orchestrator | RedHat.yml
2025-07-05 22:12:06.864113 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-05 22:12:06.864180 | orchestrator | RedHat.yml
2025-07-05 22:12:06.864264 | orchestrator | = 2.2.0"...
2025-07-05 22:12:22.685853 | orchestrator | 22:12:22.685 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-07-05 22:12:22.713754 | orchestrator | 22:12:22.713 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-07-05 22:12:23.641521 | orchestrator | 22:12:23.641 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-07-05 22:12:24.680636 | orchestrator | 22:12:24.680 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-05 22:12:25.467088 | orchestrator | 22:12:25.466 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-05 22:12:26.472058 | orchestrator | 22:12:26.471 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-05 22:12:27.327324 | orchestrator | 22:12:27.326 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.2.0...
2025-07-05 22:12:28.468350 | orchestrator | 22:12:28.467 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.2.0 (signed, key ID 4F80527A391BEFD2)
2025-07-05 22:12:28.468430 | orchestrator | 22:12:28.467 STDOUT terraform: Providers are signed by their developers.
2025-07-05 22:12:28.468442 | orchestrator | 22:12:28.467 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-05 22:12:28.468450 | orchestrator | 22:12:28.467 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-05 22:12:28.468455 | orchestrator | 22:12:28.467 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-05 22:12:28.468462 | orchestrator | 22:12:28.467 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-05 22:12:28.468469 | orchestrator | 22:12:28.467 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-05 22:12:28.468474 | orchestrator | 22:12:28.467 STDOUT terraform: you run "tofu init" in the future.
2025-07-05 22:12:28.468478 | orchestrator | 22:12:28.467 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-05 22:12:28.468482 | orchestrator | 22:12:28.467 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-05 22:12:28.468486 | orchestrator | 22:12:28.468 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-05 22:12:28.468490 | orchestrator | 22:12:28.468 STDOUT terraform: should now work.
2025-07-05 22:12:28.468494 | orchestrator | 22:12:28.468 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-05 22:12:28.468497 | orchestrator | 22:12:28.468 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-05 22:12:28.468502 | orchestrator | 22:12:28.468 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-05 22:12:28.600159 | orchestrator | 22:12:28.599 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-07-05 22:12:28.600287 | orchestrator | 22:12:28.600 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-05 22:12:28.807453 | orchestrator | 22:12:28.806 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-05 22:12:28.807643 | orchestrator | 22:12:28.806 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-05 22:12:28.807663 | orchestrator | 22:12:28.807 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-05 22:12:28.807675 | orchestrator | 22:12:28.807 STDOUT terraform: for this configuration.
2025-07-05 22:12:28.961345 | orchestrator | 22:12:28.961 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-07-05 22:12:28.961462 | orchestrator | 22:12:28.961 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-07-05 22:12:29.091787 | orchestrator | 22:12:29.091 STDOUT terraform: ci.auto.tfvars
2025-07-05 22:12:29.096723 | orchestrator | 22:12:29.096 STDOUT terraform: default_custom.tf
2025-07-05 22:12:29.254813 | orchestrator | 22:12:29.254 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
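For reference, the provider set resolved during `tofu init` above corresponds to a `required_providers` block along these lines. This is a reconstruction from the version constraints visible in the log, not the testbed repository's actual file; the constraints for `local` and `null` are truncated in the log and therefore omitted here:

```hcl
terraform {
  required_providers {
    # The ">= 1.53.0" constraint appears in the log; init resolved v3.2.0.
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    # init resolved hashicorp/local v2.5.3 and hashicorp/null v3.2.4.
    local = {
      source = "hashicorp/local"
    }
    null = {
      source = "hashicorp/null"
    }
  }
}
```

Running `tofu init` against such a block downloads the providers, verifies their signatures, and pins the selected versions in `.terraform.lock.hcl`, which is why the log recommends committing that file to version control.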
2025-07-05 22:12:30.232908 | orchestrator | 22:12:30.232 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-07-05 22:12:30.744692 | orchestrator | 22:12:30.744 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-07-05 22:12:30.952919 | orchestrator | 22:12:30.949 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-07-05 22:12:30.952989 | orchestrator | 22:12:30.950 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-07-05 22:12:30.952996 | orchestrator | 22:12:30.950 STDOUT terraform:   + create
2025-07-05 22:12:30.953001 | orchestrator | 22:12:30.950 STDOUT terraform:   <= read (data resources)
2025-07-05 22:12:30.953006 | orchestrator | 22:12:30.950 STDOUT terraform: OpenTofu will perform the following actions:
2025-07-05 22:12:30.953010 | orchestrator | 22:12:30.950 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-07-05 22:12:30.953014 | orchestrator | 22:12:30.950 STDOUT terraform:   # (config refers to values not yet known)
2025-07-05 22:12:30.953019 | orchestrator | 22:12:30.950 STDOUT terraform:   <= data "openstack_images_image_v2" "image" {
2025-07-05 22:12:30.953022 | orchestrator | 22:12:30.950 STDOUT terraform:   + checksum = (known after apply)
2025-07-05 22:12:30.953026 | orchestrator | 22:12:30.950 STDOUT terraform:   + created_at = (known after apply)
2025-07-05 22:12:30.953030 | orchestrator | 22:12:30.950 STDOUT terraform:   + file = (known after apply)
2025-07-05 22:12:30.953034 | orchestrator | 22:12:30.950 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.953038 | orchestrator | 22:12:30.950 STDOUT terraform:   + metadata = (known after apply)
2025-07-05 22:12:30.953054 | orchestrator | 22:12:30.950 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-07-05 22:12:30.953058 | orchestrator | 22:12:30.950 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-07-05 22:12:30.953062 | orchestrator | 22:12:30.950 STDOUT terraform:   + most_recent = true
2025-07-05 22:12:30.953066 | orchestrator | 22:12:30.950 STDOUT terraform:   + name = (known after apply)
2025-07-05 22:12:30.953070 | orchestrator | 22:12:30.950 STDOUT terraform:   + protected = (known after apply)
2025-07-05 22:12:30.953073 | orchestrator | 22:12:30.950 STDOUT terraform:   + region = (known after apply)
2025-07-05 22:12:30.953077 | orchestrator | 22:12:30.950 STDOUT terraform:   + schema = (known after apply)
2025-07-05 22:12:30.953081 | orchestrator | 22:12:30.950 STDOUT terraform:   + size_bytes = (known after apply)
2025-07-05 22:12:30.953085 | orchestrator | 22:12:30.950 STDOUT terraform:   + tags = (known after apply)
2025-07-05 22:12:30.953088 | orchestrator | 22:12:30.950 STDOUT terraform:   + updated_at = (known after apply)
2025-07-05 22:12:30.953092 | orchestrator | 22:12:30.950 STDOUT terraform:   }
2025-07-05 22:12:30.953098 | orchestrator | 22:12:30.950 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-07-05 22:12:30.953102 | orchestrator | 22:12:30.950 STDOUT terraform:   # (config refers to values not yet known)
2025-07-05 22:12:30.953106 | orchestrator | 22:12:30.950 STDOUT terraform:   <= data "openstack_images_image_v2" "image_node" {
2025-07-05 22:12:30.953109 | orchestrator | 22:12:30.950 STDOUT terraform:   + checksum = (known after apply)
2025-07-05 22:12:30.953113 | orchestrator | 22:12:30.950 STDOUT terraform:   + created_at = (known after apply)
2025-07-05 22:12:30.953117 | orchestrator | 22:12:30.951 STDOUT terraform:   + file = (known after apply)
2025-07-05 22:12:30.953121 | orchestrator | 22:12:30.951 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.953124 | orchestrator | 22:12:30.951 STDOUT terraform:   + metadata = (known after apply)
2025-07-05 22:12:30.953128 | orchestrator | 22:12:30.951 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-07-05 22:12:30.953131 | orchestrator | 22:12:30.951 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-07-05 22:12:30.953141 | orchestrator | 22:12:30.951 STDOUT terraform:   + most_recent = true
2025-07-05 22:12:30.953145 | orchestrator | 22:12:30.951 STDOUT terraform:   + name = (known after apply)
2025-07-05 22:12:30.953149 | orchestrator | 22:12:30.951 STDOUT terraform:   + protected = (known after apply)
2025-07-05 22:12:30.953153 | orchestrator | 22:12:30.951 STDOUT terraform:   + region = (known after apply)
2025-07-05 22:12:30.953170 | orchestrator | 22:12:30.951 STDOUT terraform:   + schema = (known after apply)
2025-07-05 22:12:30.953174 | orchestrator | 22:12:30.951 STDOUT terraform:   + size_bytes = (known after apply)
2025-07-05 22:12:30.953178 | orchestrator | 22:12:30.951 STDOUT terraform:   + tags = (known after apply)
2025-07-05 22:12:30.953181 | orchestrator | 22:12:30.951 STDOUT terraform:   + updated_at = (known after apply)
2025-07-05 22:12:30.953185 | orchestrator | 22:12:30.951 STDOUT terraform:   }
2025-07-05 22:12:30.953189 | orchestrator | 22:12:30.951 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-07-05 22:12:30.953196 | orchestrator | 22:12:30.951 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-07-05 22:12:30.953200 | orchestrator | 22:12:30.951 STDOUT terraform:   + content = (known after apply)
2025-07-05 22:12:30.953204 | orchestrator | 22:12:30.951 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-07-05 22:12:30.953208 | orchestrator | 22:12:30.951 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-07-05 22:12:30.953211 | orchestrator | 22:12:30.951 STDOUT terraform:   + content_md5 = (known after apply)
2025-07-05 22:12:30.953215 | orchestrator | 22:12:30.951 STDOUT terraform:   + content_sha1 = (known after apply)
2025-07-05 22:12:30.953219 | orchestrator | 22:12:30.951 STDOUT terraform:   + content_sha256 = (known after apply)
2025-07-05 22:12:30.953223 | orchestrator | 22:12:30.951 STDOUT terraform:   + content_sha512 = (known after apply)
2025-07-05 22:12:30.953227 | orchestrator | 22:12:30.951 STDOUT terraform:   + directory_permission = "0777"
2025-07-05 22:12:30.953231 | orchestrator | 22:12:30.951 STDOUT terraform:   + file_permission = "0644"
2025-07-05 22:12:30.953234 | orchestrator | 22:12:30.951 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-07-05 22:12:30.953238 | orchestrator | 22:12:30.951 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.953242 | orchestrator | 22:12:30.951 STDOUT terraform:   }
2025-07-05 22:12:30.953246 | orchestrator | 22:12:30.951 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-07-05 22:12:30.953250 | orchestrator | 22:12:30.952 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-07-05 22:12:30.953253 | orchestrator | 22:12:30.952 STDOUT terraform:   + content = (known after apply)
2025-07-05 22:12:30.953257 | orchestrator | 22:12:30.952 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-07-05 22:12:30.953261 | orchestrator | 22:12:30.952 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-07-05 22:12:30.953265 | orchestrator | 22:12:30.952 STDOUT terraform:   + content_md5 = (known after apply)
2025-07-05 22:12:30.953268 | orchestrator | 22:12:30.952 STDOUT terraform:   + content_sha1 = (known after apply)
2025-07-05 22:12:30.953272 | orchestrator | 22:12:30.952 STDOUT terraform:   + content_sha256 = (known after apply)
2025-07-05 22:12:30.953276 | orchestrator | 22:12:30.952 STDOUT terraform:   + content_sha512 = (known after apply)
2025-07-05 22:12:30.953279 | orchestrator | 22:12:30.952 STDOUT terraform:   + directory_permission = "0777"
2025-07-05 22:12:30.953283 | orchestrator | 22:12:30.952 STDOUT terraform:   + file_permission = "0644"
2025-07-05 22:12:30.953287 | orchestrator | 22:12:30.952 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-07-05 22:12:30.953291 | orchestrator | 22:12:30.952 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.953295 | orchestrator | 22:12:30.952 STDOUT terraform:   }
2025-07-05 22:12:30.953301 | orchestrator | 22:12:30.952 STDOUT terraform:   # local_file.inventory will be created
2025-07-05 22:12:30.953305 | orchestrator | 22:12:30.952 STDOUT terraform:   + resource "local_file" "inventory" {
2025-07-05 22:12:30.953309 | orchestrator | 22:12:30.952 STDOUT terraform:   + content = (known after apply)
2025-07-05 22:12:30.953316 | orchestrator | 22:12:30.952 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-07-05 22:12:30.953319 | orchestrator | 22:12:30.952 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-07-05 22:12:30.953326 | orchestrator | 22:12:30.952 STDOUT terraform:   + content_md5 = (known after apply)
2025-07-05 22:12:30.953330 | orchestrator | 22:12:30.952 STDOUT terraform:   + content_sha1 = (known after apply)
2025-07-05 22:12:30.953333 | orchestrator | 22:12:30.952 STDOUT terraform:   + content_sha256 = (known after apply)
2025-07-05 22:12:30.953337 | orchestrator | 22:12:30.952 STDOUT terraform:   + content_sha512 = (known after apply)
2025-07-05 22:12:30.953341 | orchestrator | 22:12:30.952 STDOUT terraform:   + directory_permission = "0777"
2025-07-05 22:12:30.953345 | orchestrator | 22:12:30.952 STDOUT terraform:   + file_permission = "0644"
2025-07-05 22:12:30.954056 | orchestrator | 22:12:30.952 STDOUT terraform:   + filename = "inventory.ci"
2025-07-05 22:12:30.954150 | orchestrator | 22:12:30.953 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.954167 | orchestrator | 22:12:30.953 STDOUT terraform:   }
2025-07-05 22:12:30.954179 | orchestrator | 22:12:30.953 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-07-05 22:12:30.954193 | orchestrator | 22:12:30.953 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-07-05 22:12:30.954207 | orchestrator | 22:12:30.953 STDOUT terraform:   + content = (sensitive value)
2025-07-05 22:12:30.954219 | orchestrator | 22:12:30.953 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-07-05 22:12:30.954230 | orchestrator | 22:12:30.953 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-07-05 22:12:30.954241 | orchestrator | 22:12:30.953 STDOUT terraform:   + content_md5 = (known after apply)
2025-07-05 22:12:30.954251 | orchestrator | 22:12:30.953 STDOUT terraform:   + content_sha1 = (known after apply)
2025-07-05 22:12:30.954262 | orchestrator | 22:12:30.953 STDOUT terraform:   + content_sha256 = (known after apply)
2025-07-05 22:12:30.954273 | orchestrator | 22:12:30.953 STDOUT terraform:   + content_sha512 = (known after apply)
2025-07-05 22:12:30.954364 | orchestrator | 22:12:30.953 STDOUT terraform:   + directory_permission = "0700"
2025-07-05 22:12:30.954377 | orchestrator | 22:12:30.953 STDOUT terraform:   + file_permission = "0600"
2025-07-05 22:12:30.954416 | orchestrator | 22:12:30.953 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-07-05 22:12:30.954428 | orchestrator | 22:12:30.953 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.954438 | orchestrator | 22:12:30.953 STDOUT terraform:   }
2025-07-05 22:12:30.954449 | orchestrator | 22:12:30.953 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-07-05 22:12:30.954460 | orchestrator | 22:12:30.953 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-07-05 22:12:30.954471 | orchestrator | 22:12:30.953 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.954482 | orchestrator | 22:12:30.953 STDOUT terraform:   }
2025-07-05 22:12:30.954507 | orchestrator | 22:12:30.953 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-07-05 22:12:30.954553 | orchestrator | 22:12:30.953 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-07-05 22:12:30.954565 | orchestrator | 22:12:30.954 STDOUT terraform:   + attachment = (known after apply)
2025-07-05 22:12:30.954577 | orchestrator | 22:12:30.954 STDOUT terraform:   + availability_zone = "nova"
2025-07-05 22:12:30.954588 | orchestrator | 22:12:30.954 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.954598 | orchestrator | 22:12:30.954 STDOUT terraform:   + image_id = (known after apply)
2025-07-05 22:12:30.954609 | orchestrator | 22:12:30.954 STDOUT terraform:   + metadata = (known after apply)
2025-07-05 22:12:30.954620 | orchestrator | 22:12:30.954 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-07-05 22:12:30.954630 | orchestrator | 22:12:30.954 STDOUT terraform:   + region = (known after apply)
2025-07-05 22:12:30.954641 | orchestrator | 22:12:30.954 STDOUT terraform:   + size = 80
2025-07-05 22:12:30.954652 | orchestrator | 22:12:30.954 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-05 22:12:30.954662 | orchestrator | 22:12:30.954 STDOUT terraform:   + volume_type = "ssd"
2025-07-05 22:12:30.954673 | orchestrator | 22:12:30.954 STDOUT terraform:   }
2025-07-05 22:12:30.954684 | orchestrator | 22:12:30.954 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-07-05 22:12:30.954695 | orchestrator | 22:12:30.954 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-05 22:12:30.954705 | orchestrator | 22:12:30.954 STDOUT terraform:   + attachment = (known after apply)
2025-07-05 22:12:30.954721 | orchestrator | 22:12:30.954 STDOUT terraform:   + availability_zone = "nova"
2025-07-05 22:12:30.954732 | orchestrator | 22:12:30.954 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.954743 | orchestrator | 22:12:30.954 STDOUT terraform:   + image_id = (known after apply)
2025-07-05 22:12:30.954753 | orchestrator | 22:12:30.954 STDOUT terraform:   + metadata = (known after apply)
2025-07-05 22:12:30.954764 | orchestrator | 22:12:30.954 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-07-05 22:12:30.954775 | orchestrator | 22:12:30.954 STDOUT terraform:   + region = (known after apply)
2025-07-05 22:12:30.954786 | orchestrator | 22:12:30.954 STDOUT terraform:   + size = 80
2025-07-05 22:12:30.954796 | orchestrator | 22:12:30.954 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-05 22:12:30.954811 | orchestrator | 22:12:30.954 STDOUT terraform:   + volume_type = "ssd"
2025-07-05 22:12:30.954821 | orchestrator | 22:12:30.954 STDOUT terraform:   }
2025-07-05 22:12:30.954833 | orchestrator | 22:12:30.954 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-07-05 22:12:30.954847 | orchestrator | 22:12:30.954 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-05 22:12:30.954862 | orchestrator | 22:12:30.954 STDOUT terraform:   + attachment = (known after apply)
2025-07-05 22:12:30.954885 | orchestrator | 22:12:30.954 STDOUT terraform:   + availability_zone = "nova"
2025-07-05 22:12:30.954935 | orchestrator | 22:12:30.954 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.954951 | orchestrator | 22:12:30.954 STDOUT terraform:   + image_id = (known after apply)
2025-07-05 22:12:30.955006 | orchestrator | 22:12:30.954 STDOUT terraform:   + metadata = (known after apply)
2025-07-05 22:12:30.955023 | orchestrator | 22:12:30.954 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-07-05 22:12:30.955064 | orchestrator | 22:12:30.955 STDOUT terraform:   + region = (known after apply)
2025-07-05 22:12:30.955080 | orchestrator | 22:12:30.955 STDOUT terraform:   + size = 80
2025-07-05 22:12:30.955095 | orchestrator | 22:12:30.955 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-05 22:12:30.955127 | orchestrator | 22:12:30.955 STDOUT terraform:   + volume_type = "ssd"
2025-07-05 22:12:30.955143 | orchestrator | 22:12:30.955 STDOUT terraform:   }
2025-07-05 22:12:30.955157 | orchestrator | 22:12:30.955 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-07-05 22:12:30.955205 | orchestrator | 22:12:30.955 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-05 22:12:30.955265 | orchestrator | 22:12:30.955 STDOUT terraform:   + attachment = (known after apply)
2025-07-05 22:12:30.955284 | orchestrator | 22:12:30.955 STDOUT terraform:   + availability_zone = "nova"
2025-07-05 22:12:30.955300 | orchestrator | 22:12:30.955 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.955314 | orchestrator | 22:12:30.955 STDOUT terraform:   + image_id = (known after apply)
2025-07-05 22:12:30.955356 | orchestrator | 22:12:30.955 STDOUT terraform:   + metadata = (known after apply)
2025-07-05 22:12:30.955416 | orchestrator | 22:12:30.955 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-07-05 22:12:30.955453 | orchestrator | 22:12:30.955 STDOUT terraform:   + region = (known after apply)
2025-07-05 22:12:30.955465 | orchestrator | 22:12:30.955 STDOUT terraform:   + size = 80
2025-07-05 22:12:30.955479 | orchestrator | 22:12:30.955 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-05 22:12:30.955493 | orchestrator | 22:12:30.955 STDOUT terraform:   + volume_type = "ssd"
2025-07-05 22:12:30.955504 | orchestrator | 22:12:30.955 STDOUT terraform:   }
2025-07-05 22:12:30.955546 | orchestrator | 22:12:30.955 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-07-05 22:12:30.955594 | orchestrator | 22:12:30.955 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-05 22:12:30.955632 | orchestrator | 22:12:30.955 STDOUT terraform:   + attachment = (known after apply)
2025-07-05 22:12:30.955648 | orchestrator | 22:12:30.955 STDOUT terraform:   + availability_zone = "nova"
2025-07-05 22:12:30.955681 | orchestrator | 22:12:30.955 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.955722 | orchestrator | 22:12:30.955 STDOUT terraform:   + image_id = (known after apply)
2025-07-05 22:12:30.955738 | orchestrator | 22:12:30.955 STDOUT terraform:   + metadata = (known after apply)
2025-07-05 22:12:30.955790 | orchestrator | 22:12:30.955 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-07-05 22:12:30.955818 | orchestrator | 22:12:30.955 STDOUT terraform:   + region = (known after apply)
2025-07-05 22:12:30.955834 | orchestrator | 22:12:30.955 STDOUT terraform:   + size = 80
2025-07-05 22:12:30.955848 | orchestrator | 22:12:30.955 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-05 22:12:30.955884 | orchestrator | 22:12:30.955 STDOUT terraform:   + volume_type = "ssd"
2025-07-05 22:12:30.955897 | orchestrator | 22:12:30.955 STDOUT terraform:   }
2025-07-05 22:12:30.955973 | orchestrator | 22:12:30.955 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-07-05 22:12:30.955993 | orchestrator | 22:12:30.955 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-05 22:12:30.956008 | orchestrator | 22:12:30.955 STDOUT terraform:   + attachment = (known after apply)
2025-07-05 22:12:30.956023 | orchestrator | 22:12:30.955 STDOUT terraform:   + availability_zone = "nova"
2025-07-05 22:12:30.956063 | orchestrator | 22:12:30.956 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.956080 | orchestrator | 22:12:30.956 STDOUT terraform:   + image_id = (known after apply)
2025-07-05 22:12:30.956123 | orchestrator | 22:12:30.956 STDOUT terraform:   + metadata = (known after apply)
2025-07-05 22:12:30.956166 | orchestrator | 22:12:30.956 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-07-05 22:12:30.956203 | orchestrator | 22:12:30.956 STDOUT terraform:   + region = (known after apply)
2025-07-05 22:12:30.956219 | orchestrator | 22:12:30.956 STDOUT terraform:   + size = 80
2025-07-05 22:12:30.956235 | orchestrator | 22:12:30.956 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-05 22:12:30.956275 | orchestrator | 22:12:30.956 STDOUT terraform:   + volume_type = "ssd"
2025-07-05 22:12:30.956297 | orchestrator | 22:12:30.956 STDOUT terraform:   }
2025-07-05 22:12:30.956412 | orchestrator | 22:12:30.956 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-07-05 22:12:30.956444 | orchestrator | 22:12:30.956 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-05 22:12:30.956468 | orchestrator | 22:12:30.956 STDOUT terraform:   + attachment = (known after apply)
2025-07-05 22:12:30.956486 | orchestrator | 22:12:30.956 STDOUT terraform:   + availability_zone = "nova"
2025-07-05 22:12:30.956500 | orchestrator | 22:12:30.956 STDOUT terraform:   + id = (known after apply)
2025-07-05 22:12:30.956545 | orchestrator | 22:12:30.956 STDOUT terraform:   + image_id = (known after apply)
2025-07-05 22:12:30.956583 | orchestrator | 22:12:30.956 STDOUT terraform:   + metadata = (known after apply)
2025-07-05 22:12:30.956623 | orchestrator | 22:12:30.956 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-07-05 22:12:30.956649 | orchestrator | 22:12:30.956 STDOUT terraform:   + region = (known after apply)
2025-07-05 22:12:30.956665 | orchestrator | 22:12:30.956 STDOUT terraform:   + size = 80
2025-07-05 22:12:30.956689 | orchestrator | 22:12:30.956 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-05 22:12:30.956704 | orchestrator | 22:12:30.956 STDOUT terraform:   + volume_type = "ssd"
2025-07-05 22:12:30.956718 | orchestrator | 22:12:30.956 STDOUT terraform:   }
2025-07-05 22:12:30.956765 | orchestrator | 22:12:30.956 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-07-05 22:12:30.956808 | orchestrator | 22:12:30.956 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-07-05 22:12:30.956852 | orchestrator | 22:12:30.956 STDOUT
terraform:  + attachment = (known after apply) 2025-07-05 22:12:30.956868 | orchestrator | 22:12:30.956 STDOUT terraform:  + availability_zone = "nova" 2025-07-05 22:12:30.956896 | orchestrator | 22:12:30.956 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.956937 | orchestrator | 22:12:30.956 STDOUT terraform:  + metadata = (known after apply) 2025-07-05 22:12:30.956973 | orchestrator | 22:12:30.956 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-07-05 22:12:30.957009 | orchestrator | 22:12:30.956 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.957025 | orchestrator | 22:12:30.956 STDOUT terraform:  + size = 20 2025-07-05 22:12:30.957040 | orchestrator | 22:12:30.957 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-05 22:12:30.957054 | orchestrator | 22:12:30.957 STDOUT terraform:  + volume_type = "ssd" 2025-07-05 22:12:30.957090 | orchestrator | 22:12:30.957 STDOUT terraform:  } 2025-07-05 22:12:30.957105 | orchestrator | 22:12:30.957 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-07-05 22:12:30.957157 | orchestrator | 22:12:30.957 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-05 22:12:30.957176 | orchestrator | 22:12:30.957 STDOUT terraform:  + attachment = (known after apply) 2025-07-05 22:12:30.957206 | orchestrator | 22:12:30.957 STDOUT terraform:  + availability_zone = "nova" 2025-07-05 22:12:30.957240 | orchestrator | 22:12:30.957 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.957281 | orchestrator | 22:12:30.957 STDOUT terraform:  + metadata = (known after apply) 2025-07-05 22:12:30.957322 | orchestrator | 22:12:30.957 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-07-05 22:12:30.957338 | orchestrator | 22:12:30.957 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.957354 | orchestrator | 22:12:30.957 STDOUT terraform:  + size = 20 2025-07-05 22:12:30.957369 | 
orchestrator | 22:12:30.957 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-05 22:12:30.957422 | orchestrator | 22:12:30.957 STDOUT terraform:  + volume_type = "ssd" 2025-07-05 22:12:30.957436 | orchestrator | 22:12:30.957 STDOUT terraform:  } 2025-07-05 22:12:30.957494 | orchestrator | 22:12:30.957 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-07-05 22:12:30.957549 | orchestrator | 22:12:30.957 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-05 22:12:30.957605 | orchestrator | 22:12:30.957 STDOUT terraform:  + attachment = (known after apply) 2025-07-05 22:12:30.957670 | orchestrator | 22:12:30.957 STDOUT terraform:  + availability_zone = "nova" 2025-07-05 22:12:30.957683 | orchestrator | 22:12:30.957 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.957694 | orchestrator | 22:12:30.957 STDOUT terraform:  + metadata = (known after apply) 2025-07-05 22:12:30.957708 | orchestrator | 22:12:30.957 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-07-05 22:12:30.957742 | orchestrator | 22:12:30.957 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.957758 | orchestrator | 22:12:30.957 STDOUT terraform:  + size = 20 2025-07-05 22:12:30.957794 | orchestrator | 22:12:30.957 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-05 22:12:30.957810 | orchestrator | 22:12:30.957 STDOUT terraform:  + volume_type = "ssd" 2025-07-05 22:12:30.957822 | orchestrator | 22:12:30.957 STDOUT terraform:  } 2025-07-05 22:12:30.957859 | orchestrator | 22:12:30.957 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-07-05 22:12:30.957901 | orchestrator | 22:12:30.957 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-05 22:12:30.957937 | orchestrator | 22:12:30.957 STDOUT terraform:  + attachment = (known after apply) 2025-07-05 22:12:30.957954 | orchestrator | 
22:12:30.957 STDOUT terraform:  + availability_zone = "nova" 2025-07-05 22:12:30.957994 | orchestrator | 22:12:30.957 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.958056 | orchestrator | 22:12:30.957 STDOUT terraform:  + metadata = (known after apply) 2025-07-05 22:12:30.958089 | orchestrator | 22:12:30.958 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-07-05 22:12:30.958128 | orchestrator | 22:12:30.958 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.958143 | orchestrator | 22:12:30.958 STDOUT terraform:  + size = 20 2025-07-05 22:12:30.958158 | orchestrator | 22:12:30.958 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-05 22:12:30.958190 | orchestrator | 22:12:30.958 STDOUT terraform:  + volume_type = "ssd" 2025-07-05 22:12:30.958205 | orchestrator | 22:12:30.958 STDOUT terraform:  } 2025-07-05 22:12:30.958243 | orchestrator | 22:12:30.958 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-07-05 22:12:30.958285 | orchestrator | 22:12:30.958 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-05 22:12:30.958319 | orchestrator | 22:12:30.958 STDOUT terraform:  + attachment = (known after apply) 2025-07-05 22:12:30.958335 | orchestrator | 22:12:30.958 STDOUT terraform:  + availability_zone = "nova" 2025-07-05 22:12:30.958376 | orchestrator | 22:12:30.958 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.958430 | orchestrator | 22:12:30.958 STDOUT terraform:  + metadata = (known after apply) 2025-07-05 22:12:30.958499 | orchestrator | 22:12:30.958 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-07-05 22:12:30.958516 | orchestrator | 22:12:30.958 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.958537 | orchestrator | 22:12:30.958 STDOUT terraform:  + size = 20 2025-07-05 22:12:30.958551 | orchestrator | 22:12:30.958 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-05 
22:12:30.958563 | orchestrator | 22:12:30.958 STDOUT terraform:  + volume_type = "ssd" 2025-07-05 22:12:30.958573 | orchestrator | 22:12:30.958 STDOUT terraform:  } 2025-07-05 22:12:30.958601 | orchestrator | 22:12:30.958 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-07-05 22:12:30.958647 | orchestrator | 22:12:30.958 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-05 22:12:30.958662 | orchestrator | 22:12:30.958 STDOUT terraform:  + attachment = (known after apply) 2025-07-05 22:12:30.958692 | orchestrator | 22:12:30.958 STDOUT terraform:  + availability_zone = "nova" 2025-07-05 22:12:30.958729 | orchestrator | 22:12:30.958 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.958766 | orchestrator | 22:12:30.958 STDOUT terraform:  + metadata = (known after apply) 2025-07-05 22:12:30.958804 | orchestrator | 22:12:30.958 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-07-05 22:12:30.958840 | orchestrator | 22:12:30.958 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.958861 | orchestrator | 22:12:30.958 STDOUT terraform:  + size = 20 2025-07-05 22:12:30.958876 | orchestrator | 22:12:30.958 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-05 22:12:30.958890 | orchestrator | 22:12:30.958 STDOUT terraform:  + volume_type = "ssd" 2025-07-05 22:12:30.958904 | orchestrator | 22:12:30.958 STDOUT terraform:  } 2025-07-05 22:12:30.958948 | orchestrator | 22:12:30.958 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-07-05 22:12:30.958991 | orchestrator | 22:12:30.958 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-05 22:12:30.959027 | orchestrator | 22:12:30.958 STDOUT terraform:  + attachment = (known after apply) 2025-07-05 22:12:30.959042 | orchestrator | 22:12:30.959 STDOUT terraform:  + availability_zone = "nova" 2025-07-05 22:12:30.959082 | 
orchestrator | 22:12:30.959 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.959118 | orchestrator | 22:12:30.959 STDOUT terraform:  + metadata = (known after apply) 2025-07-05 22:12:30.959157 | orchestrator | 22:12:30.959 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-07-05 22:12:30.959194 | orchestrator | 22:12:30.959 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.959210 | orchestrator | 22:12:30.959 STDOUT terraform:  + size = 20 2025-07-05 22:12:30.959225 | orchestrator | 22:12:30.959 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-05 22:12:30.959257 | orchestrator | 22:12:30.959 STDOUT terraform:  + volume_type = "ssd" 2025-07-05 22:12:30.959273 | orchestrator | 22:12:30.959 STDOUT terraform:  } 2025-07-05 22:12:30.959311 | orchestrator | 22:12:30.959 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-07-05 22:12:30.959351 | orchestrator | 22:12:30.959 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-05 22:12:30.959376 | orchestrator | 22:12:30.959 STDOUT terraform:  + attachment = (known after apply) 2025-07-05 22:12:30.959464 | orchestrator | 22:12:30.959 STDOUT terraform:  + availability_zone = "nova" 2025-07-05 22:12:30.959478 | orchestrator | 22:12:30.959 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.959492 | orchestrator | 22:12:30.959 STDOUT terraform:  + metadata = (known after apply) 2025-07-05 22:12:30.959506 | orchestrator | 22:12:30.959 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-07-05 22:12:30.959540 | orchestrator | 22:12:30.959 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.959555 | orchestrator | 22:12:30.959 STDOUT terraform:  + size = 20 2025-07-05 22:12:30.959570 | orchestrator | 22:12:30.959 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-05 22:12:30.959601 | orchestrator | 22:12:30.959 STDOUT terraform:  + volume_type = "ssd" 
2025-07-05 22:12:30.959614 | orchestrator | 22:12:30.959 STDOUT terraform:  } 2025-07-05 22:12:30.959653 | orchestrator | 22:12:30.959 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-07-05 22:12:30.959695 | orchestrator | 22:12:30.959 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-05 22:12:30.959730 | orchestrator | 22:12:30.959 STDOUT terraform:  + attachment = (known after apply) 2025-07-05 22:12:30.959744 | orchestrator | 22:12:30.959 STDOUT terraform:  + availability_zone = "nova" 2025-07-05 22:12:30.959787 | orchestrator | 22:12:30.959 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.959821 | orchestrator | 22:12:30.959 STDOUT terraform:  + metadata = (known after apply) 2025-07-05 22:12:30.959857 | orchestrator | 22:12:30.959 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-07-05 22:12:30.959893 | orchestrator | 22:12:30.959 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.959907 | orchestrator | 22:12:30.959 STDOUT terraform:  + size = 20 2025-07-05 22:12:30.959932 | orchestrator | 22:12:30.959 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-05 22:12:30.959959 | orchestrator | 22:12:30.959 STDOUT terraform:  + volume_type = "ssd" 2025-07-05 22:12:30.959972 | orchestrator | 22:12:30.959 STDOUT terraform:  } 2025-07-05 22:12:30.960017 | orchestrator | 22:12:30.959 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-07-05 22:12:30.960053 | orchestrator | 22:12:30.960 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-07-05 22:12:30.960087 | orchestrator | 22:12:30.960 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-05 22:12:30.960122 | orchestrator | 22:12:30.960 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-05 22:12:30.960157 | orchestrator | 22:12:30.960 STDOUT terraform:  + all_metadata = (known after apply) 
2025-07-05 22:12:30.960192 | orchestrator | 22:12:30.960 STDOUT terraform:  + all_tags = (known after apply) 2025-07-05 22:12:30.960206 | orchestrator | 22:12:30.960 STDOUT terraform:  + availability_zone = "nova" 2025-07-05 22:12:30.960232 | orchestrator | 22:12:30.960 STDOUT terraform:  + config_drive = true 2025-07-05 22:12:30.960263 | orchestrator | 22:12:30.960 STDOUT terraform:  + created = (known after apply) 2025-07-05 22:12:30.960298 | orchestrator | 22:12:30.960 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-05 22:12:30.960327 | orchestrator | 22:12:30.960 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-07-05 22:12:30.960341 | orchestrator | 22:12:30.960 STDOUT terraform:  + force_delete = false 2025-07-05 22:12:30.960379 | orchestrator | 22:12:30.960 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-05 22:12:30.960456 | orchestrator | 22:12:30.960 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.960488 | orchestrator | 22:12:30.960 STDOUT terraform:  + image_id = (known after apply) 2025-07-05 22:12:30.960525 | orchestrator | 22:12:30.960 STDOUT terraform:  + image_name = (known after apply) 2025-07-05 22:12:30.960552 | orchestrator | 22:12:30.960 STDOUT terraform:  + key_pair = "testbed" 2025-07-05 22:12:30.960584 | orchestrator | 22:12:30.960 STDOUT terraform:  + name = "testbed-manager" 2025-07-05 22:12:30.960598 | orchestrator | 22:12:30.960 STDOUT terraform:  + power_state = "active" 2025-07-05 22:12:30.960639 | orchestrator | 22:12:30.960 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.960673 | orchestrator | 22:12:30.960 STDOUT terraform:  + security_groups = (known after apply) 2025-07-05 22:12:30.960687 | orchestrator | 22:12:30.960 STDOUT terraform:  + stop_before_destroy = false 2025-07-05 22:12:30.960725 | orchestrator | 22:12:30.960 STDOUT terraform:  + updated = (known after apply) 2025-07-05 22:12:30.960755 | orchestrator | 22:12:30.960 STDOUT terraform:  + 
user_data = (sensitive value) 2025-07-05 22:12:30.960769 | orchestrator | 22:12:30.960 STDOUT terraform:  + block_device { 2025-07-05 22:12:30.960783 | orchestrator | 22:12:30.960 STDOUT terraform:  + boot_index = 0 2025-07-05 22:12:30.960818 | orchestrator | 22:12:30.960 STDOUT terraform:  + delete_on_termination = false 2025-07-05 22:12:30.960846 | orchestrator | 22:12:30.960 STDOUT terraform:  + destination_type = "volume" 2025-07-05 22:12:30.960875 | orchestrator | 22:12:30.960 STDOUT terraform:  + multiattach = false 2025-07-05 22:12:30.960906 | orchestrator | 22:12:30.960 STDOUT terraform:  + source_type = "volume" 2025-07-05 22:12:30.960943 | orchestrator | 22:12:30.960 STDOUT terraform:  + uuid = (known after apply) 2025-07-05 22:12:30.960958 | orchestrator | 22:12:30.960 STDOUT terraform:  } 2025-07-05 22:12:30.960968 | orchestrator | 22:12:30.960 STDOUT terraform:  + network { 2025-07-05 22:12:30.960981 | orchestrator | 22:12:30.960 STDOUT terraform:  + access_network = false 2025-07-05 22:12:30.961010 | orchestrator | 22:12:30.960 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-05 22:12:30.961041 | orchestrator | 22:12:30.961 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-05 22:12:30.961072 | orchestrator | 22:12:30.961 STDOUT terraform:  + mac = (known after apply) 2025-07-05 22:12:30.961097 | orchestrator | 22:12:30.961 STDOUT terraform:  + name = (known after apply) 2025-07-05 22:12:30.961204 | orchestrator | 22:12:30.961 STDOUT terraform:  + port = (known after apply) 2025-07-05 22:12:30.961236 | orchestrator | 22:12:30.961 STDOUT terraform:  + uuid = (known after apply) 2025-07-05 22:12:30.961282 | orchestrator | 22:12:30.961 STDOUT terraform:  } 2025-07-05 22:12:30.961297 | orchestrator | 22:12:30.961 STDOUT terraform:  } 2025-07-05 22:12:30.961350 | orchestrator | 22:12:30.961 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-07-05 22:12:30.961438 | orchestrator | 22:12:30.961 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-05 22:12:30.961477 | orchestrator | 22:12:30.961 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-05 22:12:30.961523 | orchestrator | 22:12:30.961 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-05 22:12:30.961557 | orchestrator | 22:12:30.961 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-05 22:12:30.961599 | orchestrator | 22:12:30.961 STDOUT terraform:  + all_tags = (known after apply) 2025-07-05 22:12:30.961628 | orchestrator | 22:12:30.961 STDOUT terraform:  + availability_zone = "nova" 2025-07-05 22:12:30.961642 | orchestrator | 22:12:30.961 STDOUT terraform:  + config_drive = true 2025-07-05 22:12:30.961687 | orchestrator | 22:12:30.961 STDOUT terraform:  + created = (known after apply) 2025-07-05 22:12:30.961725 | orchestrator | 22:12:30.961 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-05 22:12:30.961759 | orchestrator | 22:12:30.961 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-05 22:12:30.961785 | orchestrator | 22:12:30.961 STDOUT terraform:  + force_delete = false 2025-07-05 22:12:30.961825 | orchestrator | 22:12:30.961 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-05 22:12:30.961864 | orchestrator | 22:12:30.961 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.961902 | orchestrator | 22:12:30.961 STDOUT terraform:  + image_id = (known after apply) 2025-07-05 22:12:30.961939 | orchestrator | 22:12:30.961 STDOUT terraform:  + image_name = (known after apply) 2025-07-05 22:12:30.961968 | orchestrator | 22:12:30.961 STDOUT terraform:  + key_pair = "testbed" 2025-07-05 22:12:30.961999 | orchestrator | 22:12:30.961 STDOUT terraform:  + name = "testbed-node-0" 2025-07-05 22:12:30.962051 | orchestrator | 22:12:30.961 STDOUT terraform:  + power_state = "active" 2025-07-05 22:12:30.962085 | orchestrator | 22:12:30.962 STDOUT terraform:  + region = (known after 
apply) 2025-07-05 22:12:30.962120 | orchestrator | 22:12:30.962 STDOUT terraform:  + security_groups = (known after apply) 2025-07-05 22:12:30.962135 | orchestrator | 22:12:30.962 STDOUT terraform:  + stop_before_destroy = false 2025-07-05 22:12:30.962194 | orchestrator | 22:12:30.962 STDOUT terraform:  + updated = (known after apply) 2025-07-05 22:12:30.962251 | orchestrator | 22:12:30.962 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-05 22:12:30.962265 | orchestrator | 22:12:30.962 STDOUT terraform:  + block_device { 2025-07-05 22:12:30.962288 | orchestrator | 22:12:30.962 STDOUT terraform:  + boot_index = 0 2025-07-05 22:12:30.962314 | orchestrator | 22:12:30.962 STDOUT terraform:  + delete_on_termination = false 2025-07-05 22:12:30.962364 | orchestrator | 22:12:30.962 STDOUT terraform:  + destination_type = "volume" 2025-07-05 22:12:30.962380 | orchestrator | 22:12:30.962 STDOUT terraform:  + multiattach = false 2025-07-05 22:12:30.962555 | orchestrator | 22:12:30.962 STDOUT terraform:  + source_type = "volume" 2025-07-05 22:12:30.962573 | orchestrator | 22:12:30.962 STDOUT terraform:  + uuid = (known after apply) 2025-07-05 22:12:30.962583 | orchestrator | 22:12:30.962 STDOUT terraform:  } 2025-07-05 22:12:30.962594 | orchestrator | 22:12:30.962 STDOUT terraform:  + network { 2025-07-05 22:12:30.962604 | orchestrator | 22:12:30.962 STDOUT terraform:  + access_network = false 2025-07-05 22:12:30.962617 | orchestrator | 22:12:30.962 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-05 22:12:30.962627 | orchestrator | 22:12:30.962 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-05 22:12:30.962637 | orchestrator | 22:12:30.962 STDOUT terraform:  + mac = (known after apply) 2025-07-05 22:12:30.962650 | orchestrator | 22:12:30.962 STDOUT terraform:  + name = (known after apply) 2025-07-05 22:12:30.962663 | orchestrator | 22:12:30.962 STDOUT terraform:  + port = (known after apply) 2025-07-05 
22:12:30.962700 | orchestrator | 22:12:30.962 STDOUT terraform:  + uuid = (known after apply) 2025-07-05 22:12:30.962714 | orchestrator | 22:12:30.962 STDOUT terraform:  } 2025-07-05 22:12:30.962722 | orchestrator | 22:12:30.962 STDOUT terraform:  } 2025-07-05 22:12:30.962763 | orchestrator | 22:12:30.962 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-07-05 22:12:30.962806 | orchestrator | 22:12:30.962 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-05 22:12:30.962840 | orchestrator | 22:12:30.962 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-05 22:12:30.962874 | orchestrator | 22:12:30.962 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-05 22:12:30.962909 | orchestrator | 22:12:30.962 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-05 22:12:30.962945 | orchestrator | 22:12:30.962 STDOUT terraform:  + all_tags = (known after apply) 2025-07-05 22:12:30.962970 | orchestrator | 22:12:30.962 STDOUT terraform:  + availability_zone = "nova" 2025-07-05 22:12:30.962981 | orchestrator | 22:12:30.962 STDOUT terraform:  + config_drive = true 2025-07-05 22:12:30.963021 | orchestrator | 22:12:30.962 STDOUT terraform:  + created = (known after apply) 2025-07-05 22:12:30.963056 | orchestrator | 22:12:30.963 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-05 22:12:30.963087 | orchestrator | 22:12:30.963 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-05 22:12:30.963110 | orchestrator | 22:12:30.963 STDOUT terraform:  + force_delete = false 2025-07-05 22:12:30.963151 | orchestrator | 22:12:30.963 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-05 22:12:30.963175 | orchestrator | 22:12:30.963 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.963209 | orchestrator | 22:12:30.963 STDOUT terraform:  + image_id = (known after apply) 2025-07-05 22:12:30.963245 | orchestrator | 22:12:30.963 STDOUT 
terraform:  + image_name = (known after apply) 2025-07-05 22:12:30.963309 | orchestrator | 22:12:30.963 STDOUT terraform:  + key_pair = "testbed" 2025-07-05 22:12:30.963340 | orchestrator | 22:12:30.963 STDOUT terraform:  + name = "testbed-node-1" 2025-07-05 22:12:30.963366 | orchestrator | 22:12:30.963 STDOUT terraform:  + power_state = "active" 2025-07-05 22:12:30.963439 | orchestrator | 22:12:30.963 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.963470 | orchestrator | 22:12:30.963 STDOUT terraform:  + security_groups = (known after apply) 2025-07-05 22:12:30.963495 | orchestrator | 22:12:30.963 STDOUT terraform:  + stop_before_destroy = false 2025-07-05 22:12:30.963530 | orchestrator | 22:12:30.963 STDOUT terraform:  + updated = (known after apply) 2025-07-05 22:12:30.963581 | orchestrator | 22:12:30.963 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-05 22:12:30.963593 | orchestrator | 22:12:30.963 STDOUT terraform:  + block_device { 2025-07-05 22:12:30.963618 | orchestrator | 22:12:30.963 STDOUT terraform:  + boot_index = 0 2025-07-05 22:12:30.963651 | orchestrator | 22:12:30.963 STDOUT terraform:  + delete_on_termination = false 2025-07-05 22:12:30.963683 | orchestrator | 22:12:30.963 STDOUT terraform:  + destination_type = "volume" 2025-07-05 22:12:30.963712 | orchestrator | 22:12:30.963 STDOUT terraform:  + multiattach = false 2025-07-05 22:12:30.963742 | orchestrator | 22:12:30.963 STDOUT terraform:  + source_type = "volume" 2025-07-05 22:12:30.963780 | orchestrator | 22:12:30.963 STDOUT terraform:  + uuid = (known after apply) 2025-07-05 22:12:30.963792 | orchestrator | 22:12:30.963 STDOUT terraform:  } 2025-07-05 22:12:30.963802 | orchestrator | 22:12:30.963 STDOUT terraform:  + network { 2025-07-05 22:12:30.963813 | orchestrator | 22:12:30.963 STDOUT terraform:  + access_network = false 2025-07-05 22:12:30.965310 | orchestrator | 22:12:30.963 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-07-05 22:12:30.965337 | orchestrator | 22:12:30.963 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-05 22:12:30.965345 | orchestrator | 22:12:30.963 STDOUT terraform:  + mac = (known after apply) 2025-07-05 22:12:30.965354 | orchestrator | 22:12:30.963 STDOUT terraform:  + name = (known after apply) 2025-07-05 22:12:30.965362 | orchestrator | 22:12:30.963 STDOUT terraform:  + port = (known after apply) 2025-07-05 22:12:30.965369 | orchestrator | 22:12:30.963 STDOUT terraform:  + uuid = (known after apply) 2025-07-05 22:12:30.965377 | orchestrator | 22:12:30.963 STDOUT terraform:  } 2025-07-05 22:12:30.965408 | orchestrator | 22:12:30.963 STDOUT terraform:  } 2025-07-05 22:12:30.965417 | orchestrator | 22:12:30.963 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-07-05 22:12:30.965425 | orchestrator | 22:12:30.964 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-05 22:12:30.965444 | orchestrator | 22:12:30.964 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-05 22:12:30.965452 | orchestrator | 22:12:30.964 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-05 22:12:30.965460 | orchestrator | 22:12:30.964 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-05 22:12:30.965468 | orchestrator | 22:12:30.964 STDOUT terraform:  + all_tags = (known after apply) 2025-07-05 22:12:30.965475 | orchestrator | 22:12:30.964 STDOUT terraform:  + availability_zone = "nova" 2025-07-05 22:12:30.965484 | orchestrator | 22:12:30.964 STDOUT terraform:  + config_drive = true 2025-07-05 22:12:30.965491 | orchestrator | 22:12:30.964 STDOUT terraform:  + created = (known after apply) 2025-07-05 22:12:30.965499 | orchestrator | 22:12:30.964 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-05 22:12:30.965507 | orchestrator | 22:12:30.964 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-05 22:12:30.965515 | orchestrator | 22:12:30.964 
2025-07-05 22:12:30.964 | orchestrator | 22:12:30.964 STDOUT terraform:
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-05 22:12:30.975940 | orchestrator | 22:12:30.975 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-05 22:12:30.975951 | orchestrator | 22:12:30.975 STDOUT terraform:  + all_tags = (known after apply) 2025-07-05 22:12:30.975962 | orchestrator | 22:12:30.975 STDOUT terraform:  + device_id = (known after apply) 2025-07-05 22:12:30.975976 | orchestrator | 22:12:30.975 STDOUT terraform:  + device_owner = (known after apply) 2025-07-05 22:12:30.975988 | orchestrator | 22:12:30.975 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-05 22:12:30.976082 | orchestrator | 22:12:30.975 STDOUT terraform:  + dns_name = (known after apply) 2025-07-05 22:12:30.976104 | orchestrator | 22:12:30.976 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.976116 | orchestrator | 22:12:30.976 STDOUT terraform:  + mac_address = (known after apply) 2025-07-05 22:12:30.976131 | orchestrator | 22:12:30.976 STDOUT terraform:  + network_id = (known after apply) 2025-07-05 22:12:30.976145 | orchestrator | 22:12:30.976 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-05 22:12:30.976178 | orchestrator | 22:12:30.976 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-05 22:12:30.976227 | orchestrator | 22:12:30.976 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.976244 | orchestrator | 22:12:30.976 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-05 22:12:30.976260 | orchestrator | 22:12:30.976 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-05 22:12:30.976318 | orchestrator | 22:12:30.976 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.976333 | orchestrator | 22:12:30.976 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-05 22:12:30.976345 | orchestrator | 22:12:30.976 STDOUT terraform:  } 2025-07-05 22:12:30.976360 | orchestrator | 22:12:30.976 STDOUT terraform:  
+ allowed_address_pairs { 2025-07-05 22:12:30.976371 | orchestrator | 22:12:30.976 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-05 22:12:30.976535 | orchestrator | 22:12:30.976 STDOUT terraform:  } 2025-07-05 22:12:30.976561 | orchestrator | 22:12:30.976 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.976572 | orchestrator | 22:12:30.976 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-05 22:12:30.976596 | orchestrator | 22:12:30.976 STDOUT terraform:  } 2025-07-05 22:12:30.976607 | orchestrator | 22:12:30.976 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.976618 | orchestrator | 22:12:30.976 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-05 22:12:30.976629 | orchestrator | 22:12:30.976 STDOUT terraform:  } 2025-07-05 22:12:30.976639 | orchestrator | 22:12:30.976 STDOUT terraform:  + binding (known after apply) 2025-07-05 22:12:30.976650 | orchestrator | 22:12:30.976 STDOUT terraform:  + fixed_ip { 2025-07-05 22:12:30.976661 | orchestrator | 22:12:30.976 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-07-05 22:12:30.976675 | orchestrator | 22:12:30.976 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-05 22:12:30.976687 | orchestrator | 22:12:30.976 STDOUT terraform:  } 2025-07-05 22:12:30.976698 | orchestrator | 22:12:30.976 STDOUT terraform:  } 2025-07-05 22:12:30.976709 | orchestrator | 22:12:30.976 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-07-05 22:12:30.976720 | orchestrator | 22:12:30.976 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-05 22:12:30.976735 | orchestrator | 22:12:30.976 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-05 22:12:30.976747 | orchestrator | 22:12:30.976 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-05 22:12:30.976805 | orchestrator | 22:12:30.976 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-07-05 22:12:30.976819 | orchestrator | 22:12:30.976 STDOUT terraform:  + all_tags = (known after apply) 2025-07-05 22:12:30.976835 | orchestrator | 22:12:30.976 STDOUT terraform:  + device_id = (known after apply) 2025-07-05 22:12:30.976882 | orchestrator | 22:12:30.976 STDOUT terraform:  + device_owner = (known after apply) 2025-07-05 22:12:30.976912 | orchestrator | 22:12:30.976 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-05 22:12:30.976927 | orchestrator | 22:12:30.976 STDOUT terraform:  + dns_name = (known after apply) 2025-07-05 22:12:30.976967 | orchestrator | 22:12:30.976 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.977024 | orchestrator | 22:12:30.976 STDOUT terraform:  + mac_address = (known after apply) 2025-07-05 22:12:30.977041 | orchestrator | 22:12:30.976 STDOUT terraform:  + network_id = (known after apply) 2025-07-05 22:12:30.977055 | orchestrator | 22:12:30.977 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-05 22:12:30.977102 | orchestrator | 22:12:30.977 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-05 22:12:30.977119 | orchestrator | 22:12:30.977 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.977189 | orchestrator | 22:12:30.977 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-05 22:12:30.977204 | orchestrator | 22:12:30.977 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-05 22:12:30.977218 | orchestrator | 22:12:30.977 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.977233 | orchestrator | 22:12:30.977 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-05 22:12:30.977253 | orchestrator | 22:12:30.977 STDOUT terraform:  } 2025-07-05 22:12:30.977267 | orchestrator | 22:12:30.977 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.977278 | orchestrator | 22:12:30.977 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-05 22:12:30.977292 | 
orchestrator | 22:12:30.977 STDOUT terraform:  } 2025-07-05 22:12:30.977303 | orchestrator | 22:12:30.977 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.977317 | orchestrator | 22:12:30.977 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-05 22:12:30.977332 | orchestrator | 22:12:30.977 STDOUT terraform:  } 2025-07-05 22:12:30.977345 | orchestrator | 22:12:30.977 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.977360 | orchestrator | 22:12:30.977 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-05 22:12:30.977374 | orchestrator | 22:12:30.977 STDOUT terraform:  } 2025-07-05 22:12:30.977415 | orchestrator | 22:12:30.977 STDOUT terraform:  + binding (known after apply) 2025-07-05 22:12:30.977428 | orchestrator | 22:12:30.977 STDOUT terraform:  + fixed_ip { 2025-07-05 22:12:30.977442 | orchestrator | 22:12:30.977 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-07-05 22:12:30.977458 | orchestrator | 22:12:30.977 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-05 22:12:30.977472 | orchestrator | 22:12:30.977 STDOUT terraform:  } 2025-07-05 22:12:30.977483 | orchestrator | 22:12:30.977 STDOUT terraform:  } 2025-07-05 22:12:30.977544 | orchestrator | 22:12:30.977 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-07-05 22:12:30.977561 | orchestrator | 22:12:30.977 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-05 22:12:30.977600 | orchestrator | 22:12:30.977 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-05 22:12:30.977617 | orchestrator | 22:12:30.977 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-05 22:12:30.977662 | orchestrator | 22:12:30.977 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-05 22:12:30.977697 | orchestrator | 22:12:30.977 STDOUT terraform:  + all_tags = (known after apply) 2025-07-05 22:12:30.977713 | orchestrator | 
22:12:30.977 STDOUT terraform:  + device_id = (known after apply) 2025-07-05 22:12:30.977766 | orchestrator | 22:12:30.977 STDOUT terraform:  + device_owner = (known after apply) 2025-07-05 22:12:30.977784 | orchestrator | 22:12:30.977 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-05 22:12:30.977819 | orchestrator | 22:12:30.977 STDOUT terraform:  + dns_name = (known after apply) 2025-07-05 22:12:30.977858 | orchestrator | 22:12:30.977 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.977875 | orchestrator | 22:12:30.977 STDOUT terraform:  + mac_address = (known after apply) 2025-07-05 22:12:30.977938 | orchestrator | 22:12:30.977 STDOUT terraform:  + network_id = (known after apply) 2025-07-05 22:12:30.977960 | orchestrator | 22:12:30.977 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-05 22:12:30.977983 | orchestrator | 22:12:30.977 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-05 22:12:30.977998 | orchestrator | 22:12:30.977 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.978378 | orchestrator | 22:12:30.977 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-05 22:12:30.978430 | orchestrator | 22:12:30.978 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-05 22:12:30.978442 | orchestrator | 22:12:30.978 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.978453 | orchestrator | 22:12:30.978 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-05 22:12:30.978464 | orchestrator | 22:12:30.978 STDOUT terraform:  } 2025-07-05 22:12:30.978475 | orchestrator | 22:12:30.978 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.978486 | orchestrator | 22:12:30.978 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-05 22:12:30.978497 | orchestrator | 22:12:30.978 STDOUT terraform:  } 2025-07-05 22:12:30.978526 | orchestrator | 22:12:30.978 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 
22:12:30.978537 | orchestrator | 22:12:30.978 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-05 22:12:30.978548 | orchestrator | 22:12:30.978 STDOUT terraform:  } 2025-07-05 22:12:30.978559 | orchestrator | 22:12:30.978 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.978570 | orchestrator | 22:12:30.978 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-05 22:12:30.978581 | orchestrator | 22:12:30.978 STDOUT terraform:  } 2025-07-05 22:12:30.978591 | orchestrator | 22:12:30.978 STDOUT terraform:  + binding (known after apply) 2025-07-05 22:12:30.978602 | orchestrator | 22:12:30.978 STDOUT terraform:  + fixed_ip { 2025-07-05 22:12:30.978613 | orchestrator | 22:12:30.978 STDOUT terraform:  + ip_ad 2025-07-05 22:12:30.979314 | orchestrator | 22:12:30.979 STDOUT terraform: dress = "192.168.16.14" 2025-07-05 22:12:30.979338 | orchestrator | 22:12:30.979 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-05 22:12:30.979352 | orchestrator | 22:12:30.979 STDOUT terraform:  } 2025-07-05 22:12:30.979369 | orchestrator | 22:12:30.979 STDOUT terraform:  } 2025-07-05 22:12:30.979383 | orchestrator | 22:12:30.979 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-07-05 22:12:30.979438 | orchestrator | 22:12:30.979 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-05 22:12:30.979450 | orchestrator | 22:12:30.979 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-05 22:12:30.979478 | orchestrator | 22:12:30.979 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-05 22:12:30.979518 | orchestrator | 22:12:30.979 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-05 22:12:30.979533 | orchestrator | 22:12:30.979 STDOUT terraform:  + all_tags = (known after apply) 2025-07-05 22:12:30.979595 | orchestrator | 22:12:30.979 STDOUT terraform:  + device_id = (known after apply) 2025-07-05 22:12:30.979612 | 
orchestrator | 22:12:30.979 STDOUT terraform:  + device_owner = (known after apply) 2025-07-05 22:12:30.979646 | orchestrator | 22:12:30.979 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-05 22:12:30.979672 | orchestrator | 22:12:30.979 STDOUT terraform:  + dns_name = (known after apply) 2025-07-05 22:12:30.979705 | orchestrator | 22:12:30.979 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.979750 | orchestrator | 22:12:30.979 STDOUT terraform:  + mac_address = (known after apply) 2025-07-05 22:12:30.979765 | orchestrator | 22:12:30.979 STDOUT terraform:  + network_id = (known after apply) 2025-07-05 22:12:30.979799 | orchestrator | 22:12:30.979 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-05 22:12:30.979832 | orchestrator | 22:12:30.979 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-05 22:12:30.979876 | orchestrator | 22:12:30.979 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.979891 | orchestrator | 22:12:30.979 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-05 22:12:30.979924 | orchestrator | 22:12:30.979 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-05 22:12:30.979937 | orchestrator | 22:12:30.979 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.979950 | orchestrator | 22:12:30.979 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-05 22:12:30.979963 | orchestrator | 22:12:30.979 STDOUT terraform:  } 2025-07-05 22:12:30.979976 | orchestrator | 22:12:30.979 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.980010 | orchestrator | 22:12:30.979 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-05 22:12:30.980022 | orchestrator | 22:12:30.979 STDOUT terraform:  } 2025-07-05 22:12:30.980035 | orchestrator | 22:12:30.980 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.980047 | orchestrator | 22:12:30.980 STDOUT terraform:  + ip_address = "192.168.16.8/20" 
2025-07-05 22:12:30.980060 | orchestrator | 22:12:30.980 STDOUT terraform:  } 2025-07-05 22:12:30.980073 | orchestrator | 22:12:30.980 STDOUT terraform:  + allowed_address_pairs { 2025-07-05 22:12:30.980106 | orchestrator | 22:12:30.980 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-05 22:12:30.980117 | orchestrator | 22:12:30.980 STDOUT terraform:  } 2025-07-05 22:12:30.980130 | orchestrator | 22:12:30.980 STDOUT terraform:  + binding (known after apply) 2025-07-05 22:12:30.980140 | orchestrator | 22:12:30.980 STDOUT terraform:  + fixed_ip { 2025-07-05 22:12:30.980152 | orchestrator | 22:12:30.980 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-07-05 22:12:30.980187 | orchestrator | 22:12:30.980 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-05 22:12:30.980198 | orchestrator | 22:12:30.980 STDOUT terraform:  } 2025-07-05 22:12:30.980210 | orchestrator | 22:12:30.980 STDOUT terraform:  } 2025-07-05 22:12:30.980244 | orchestrator | 22:12:30.980 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-07-05 22:12:30.980292 | orchestrator | 22:12:30.980 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-07-05 22:12:30.980306 | orchestrator | 22:12:30.980 STDOUT terraform:  + force_destroy = false 2025-07-05 22:12:30.980319 | orchestrator | 22:12:30.980 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.980369 | orchestrator | 22:12:30.980 STDOUT terraform:  + port_id = (known after apply) 2025-07-05 22:12:30.980417 | orchestrator | 22:12:30.980 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.980442 | orchestrator | 22:12:30.980 STDOUT terraform:  + router_id = (known after apply) 2025-07-05 22:12:30.980459 | orchestrator | 22:12:30.980 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-05 22:12:30.980471 | orchestrator | 22:12:30.980 STDOUT terraform:  } 2025-07-05 22:12:30.980483 | orchestrator | 
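The six port blocks above differ only in their index and fixed IP, which is the signature of a count-indexed resource. A minimal HCL sketch of how such a plan is typically produced — the resource type, attribute names, and address values come from the plan output; the network/subnet references and the use of `cidrhost` are illustrative assumptions, not the actual testbed module:

```hcl
# Sketch only: reproduces the shape of the planned ports, not the real
# osism/testbed Terraform. Referenced network/subnet names are assumed.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    # Hosts 10..15 in 192.168.16.0/20, i.e. 192.168.16.10 .. .15
    # as shown in the plan above.
    ip_address = cidrhost("192.168.16.0/20", count.index + 10)
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
  }

  # Additional prefixes/addresses the port is allowed to carry
  # (e.g. virtual IPs), matching the plan's allowed_address_pairs.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}
```

Without the `allowed_address_pairs` entries, Neutron port security would drop traffic sourced from those VIP addresses on these ports.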
  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
apply) 2025-07-05 22:12:30.985306 | orchestrator | 22:12:30.985 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-05 22:12:30.985316 | orchestrator | 22:12:30.985 STDOUT terraform:  } 2025-07-05 22:12:30.985367 | orchestrator | 22:12:30.985 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-07-05 22:12:30.985534 | orchestrator | 22:12:30.985 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-07-05 22:12:30.985584 | orchestrator | 22:12:30.985 STDOUT terraform:  + all_tags = (known after apply) 2025-07-05 22:12:30.985594 | orchestrator | 22:12:30.985 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-07-05 22:12:30.985601 | orchestrator | 22:12:30.985 STDOUT terraform:  + dns_nameservers = [ 2025-07-05 22:12:30.985614 | orchestrator | 22:12:30.985 STDOUT terraform:  + "8.8.8.8", 2025-07-05 22:12:30.985633 | orchestrator | 22:12:30.985 STDOUT terraform:  + "9.9.9.9", 2025-07-05 22:12:30.985639 | orchestrator | 22:12:30.985 STDOUT terraform:  ] 2025-07-05 22:12:30.985646 | orchestrator | 22:12:30.985 STDOUT terraform:  + enable_dhcp = true 2025-07-05 22:12:30.985652 | orchestrator | 22:12:30.985 STDOUT terraform:  + gateway_ip = (known after apply) 2025-07-05 22:12:30.985659 | orchestrator | 22:12:30.985 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.985665 | orchestrator | 22:12:30.985 STDOUT terraform:  + ip_version = 4 2025-07-05 22:12:30.985674 | orchestrator | 22:12:30.985 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-07-05 22:12:30.985680 | orchestrator | 22:12:30.985 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-07-05 22:12:30.985689 | orchestrator | 22:12:30.985 STDOUT terraform:  + name = "subnet-testbed-management" 2025-07-05 22:12:30.985809 | orchestrator | 22:12:30.985 STDOUT terraform:  + network_id = (known after apply) 2025-07-05 22:12:30.985840 | orchestrator | 22:12:30.985 STDOUT terraform:  + no_gateway = 
false 2025-07-05 22:12:30.985848 | orchestrator | 22:12:30.985 STDOUT terraform:  + region = (known after apply) 2025-07-05 22:12:30.985859 | orchestrator | 22:12:30.985 STDOUT terraform:  + service_types = (known after apply) 2025-07-05 22:12:30.985866 | orchestrator | 22:12:30.985 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-05 22:12:30.985873 | orchestrator | 22:12:30.985 STDOUT terraform:  + allocation_pool { 2025-07-05 22:12:30.985880 | orchestrator | 22:12:30.985 STDOUT terraform:  + end = "192.168.31.250" 2025-07-05 22:12:30.985890 | orchestrator | 22:12:30.985 STDOUT terraform:  + start = "192.168.31.200" 2025-07-05 22:12:30.985897 | orchestrator | 22:12:30.985 STDOUT terraform:  } 2025-07-05 22:12:30.985907 | orchestrator | 22:12:30.985 STDOUT terraform:  } 2025-07-05 22:12:30.985915 | orchestrator | 22:12:30.985 STDOUT terraform:  # terraform_data.image will be created 2025-07-05 22:12:30.985943 | orchestrator | 22:12:30.985 STDOUT terraform:  + resource "terraform_data" "image" { 2025-07-05 22:12:30.985953 | orchestrator | 22:12:30.985 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.985978 | orchestrator | 22:12:30.985 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-05 22:12:30.985988 | orchestrator | 22:12:30.985 STDOUT terraform:  + output = (known after apply) 2025-07-05 22:12:30.985997 | orchestrator | 22:12:30.985 STDOUT terraform:  } 2025-07-05 22:12:30.986050 | orchestrator | 22:12:30.985 STDOUT terraform:  # terraform_data.image_node will be created 2025-07-05 22:12:30.986062 | orchestrator | 22:12:30.986 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-07-05 22:12:30.986091 | orchestrator | 22:12:30.986 STDOUT terraform:  + id = (known after apply) 2025-07-05 22:12:30.986101 | orchestrator | 22:12:30.986 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-05 22:12:30.986128 | orchestrator | 22:12:30.986 STDOUT terraform:  + output = (known after apply) 2025-07-05 22:12:30.986138 | 
orchestrator | 22:12:30.986 STDOUT terraform:  } 2025-07-05 22:12:30.986166 | orchestrator | 22:12:30.986 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-07-05 22:12:30.986185 | orchestrator | 22:12:30.986 STDOUT terraform: Changes to Outputs: 2025-07-05 22:12:30.986194 | orchestrator | 22:12:30.986 STDOUT terraform:  + manager_address = (sensitive value) 2025-07-05 22:12:30.986222 | orchestrator | 22:12:30.986 STDOUT terraform:  + private_key = (sensitive value) 2025-07-05 22:12:31.193881 | orchestrator | 22:12:31.193 STDOUT terraform: terraform_data.image: Creating... 2025-07-05 22:12:31.195349 | orchestrator | 22:12:31.195 STDOUT terraform: terraform_data.image_node: Creating... 2025-07-05 22:12:31.195883 | orchestrator | 22:12:31.195 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=5d899a3e-a4c1-9271-4b90-bcd205cb30d8] 2025-07-05 22:12:31.198736 | orchestrator | 22:12:31.198 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=3f7060e5-73b0-d958-a091-dae614154aff] 2025-07-05 22:12:31.212721 | orchestrator | 22:12:31.212 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-07-05 22:12:31.215906 | orchestrator | 22:12:31.215 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-07-05 22:12:31.221753 | orchestrator | 22:12:31.221 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-07-05 22:12:31.222237 | orchestrator | 22:12:31.222 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-07-05 22:12:31.222663 | orchestrator | 22:12:31.222 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-07-05 22:12:31.224245 | orchestrator | 22:12:31.224 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-07-05 22:12:31.225103 | orchestrator | 22:12:31.224 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 
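The plan above includes a keepalived/VRRP rule that matches on IP protocol number 112 rather than a port. A minimal HCL sketch consistent with the planned `security_group_rule_vrrp` resource; the `security_group_id` reference is an assumption, since the plan only shows it as "(known after apply)":

```hcl
# Sketch reconstructed from the plan output; the referenced security group
# is assumed to be the testbed-node group created in the same plan.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP has no ports; it is matched by IP protocol number
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```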
2025-07-05 22:12:31.225520 | orchestrator | 22:12:31.225 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-07-05 22:12:31.228895 | orchestrator | 22:12:31.228 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-07-05 22:12:31.230683 | orchestrator | 22:12:31.230 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-07-05 22:12:31.677154 | orchestrator | 22:12:31.676 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-07-05 22:12:31.683097 | orchestrator | 22:12:31.682 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-07-05 22:12:31.687476 | orchestrator | 22:12:31.687 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-07-05 22:12:31.697120 | orchestrator | 22:12:31.697 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-07-05 22:12:31.728503 | orchestrator | 22:12:31.728 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-07-05 22:12:31.735477 | orchestrator | 22:12:31.735 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-07-05 22:12:37.308054 | orchestrator | 22:12:37.307 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=ed0b2c42-e8f5-452c-9616-a4439d6532ee]
2025-07-05 22:12:37.318067 | orchestrator | 22:12:37.317 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-07-05 22:12:41.224234 | orchestrator | 22:12:41.223 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-07-05 22:12:41.224429 | orchestrator | 22:12:41.224 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-07-05 22:12:41.226225 | orchestrator | 22:12:41.225 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-07-05 22:12:41.226463 | orchestrator | 22:12:41.226 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-07-05 22:12:41.226585 | orchestrator | 22:12:41.226 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-07-05 22:12:41.229367 | orchestrator | 22:12:41.229 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-07-05 22:12:41.687352 | orchestrator | 22:12:41.687 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-07-05 22:12:41.698797 | orchestrator | 22:12:41.698 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-07-05 22:12:41.736130 | orchestrator | 22:12:41.735 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-07-05 22:12:41.812788 | orchestrator | 22:12:41.812 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=21be9c94-8d55-4d0c-8ee7-a63f66622af7]
2025-07-05 22:12:41.821880 | orchestrator | 22:12:41.821 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-07-05 22:12:41.822580 | orchestrator | 22:12:41.822 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=5326e027-1676-4a37-b778-dc441a5dd20f]
2025-07-05 22:12:41.829132 | orchestrator | 22:12:41.828 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=8a7d49ca-9238-4676-a846-742ace525871]
2025-07-05 22:12:41.833024 | orchestrator | 22:12:41.832 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=04acd911-9b95-486d-a663-ed49966b13bc]
2025-07-05 22:12:41.833114 | orchestrator | 22:12:41.832 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-07-05 22:12:41.842066 | orchestrator | 22:12:41.841 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-07-05 22:12:41.842139 | orchestrator | 22:12:41.841 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-07-05 22:12:41.852654 | orchestrator | 22:12:41.852 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=19122c33-f71f-45f9-9cf9-313728601123]
2025-07-05 22:12:41.856233 | orchestrator | 22:12:41.856 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=b8c0761f-22b5-43a1-bf1b-76278e72919b]
2025-07-05 22:12:41.858848 | orchestrator | 22:12:41.858 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-07-05 22:12:41.860285 | orchestrator | 22:12:41.860 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-07-05 22:12:41.908510 | orchestrator | 22:12:41.907 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=ba536110-d8e3-4c62-9758-5989affe708c]
2025-07-05 22:12:41.909855 | orchestrator | 22:12:41.909 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=ed4648fa-96a1-4881-93bd-124d41734f11]
2025-07-05 22:12:41.923804 | orchestrator | 22:12:41.923 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-07-05 22:12:41.925962 | orchestrator | 22:12:41.925 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-07-05 22:12:41.929192 | orchestrator | 22:12:41.928 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=cfa7510ba9dc5d75a3ace5057ee61fcccfa6f17f]
2025-07-05 22:12:41.933052 | orchestrator | 22:12:41.932 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=b2794d5282e5aa9e61cb938a7b734d77a6299727]
2025-07-05 22:12:41.936253 | orchestrator | 22:12:41.936 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-07-05 22:12:41.943375 | orchestrator | 22:12:41.943 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=f21d976d-9ccd-433e-8515-86bf556b9e6c]
2025-07-05 22:12:47.320610 | orchestrator | 22:12:47.320 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-07-05 22:12:47.640801 | orchestrator | 22:12:47.640 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=8af9ded3-14bc-4604-a1eb-e76458d00fca]
2025-07-05 22:12:47.752141 | orchestrator | 22:12:47.751 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=f4ddc9e4-a6d7-4754-ae7a-d9ad1bb94cae]
2025-07-05 22:12:47.760376 | orchestrator | 22:12:47.760 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-07-05 22:12:51.822642 | orchestrator | 22:12:51.822 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-07-05 22:12:51.834138 | orchestrator | 22:12:51.833 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-07-05 22:12:51.843252 | orchestrator | 22:12:51.843 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-07-05 22:12:51.843521 | orchestrator | 22:12:51.843 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-07-05 22:12:51.859895 | orchestrator | 22:12:51.859 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-07-05 22:12:51.860986 | orchestrator | 22:12:51.860 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-07-05 22:12:52.175475 | orchestrator | 22:12:52.175 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=6d0cd1c5-87e4-438c-b8ef-3a341283ec1c]
2025-07-05 22:12:52.180489 | orchestrator | 22:12:52.180 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6]
2025-07-05 22:12:52.195814 | orchestrator | 22:12:52.195 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=8423e915-6ffe-427e-9e66-23a147af282b]
2025-07-05 22:12:52.239261 | orchestrator | 22:12:52.238 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=c08a8ddb-ede5-4204-aaab-cb049d6c9122]
2025-07-05 22:12:52.254653 | orchestrator | 22:12:52.254 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=e374bb14-455a-4ad8-82c6-811b57be8189]
2025-07-05 22:12:52.289033 | orchestrator | 22:12:52.288 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=3c0c1b5c-312e-43c8-bcec-97d16f18bccb]
2025-07-05 22:12:55.505841 | orchestrator | 22:12:55.505 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=f9420ed8-74d8-4e6d-a584-29f88b34a42c]
2025-07-05 22:12:55.512658 | orchestrator | 22:12:55.512 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-07-05 22:12:55.512776 | orchestrator | 22:12:55.512 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-07-05 22:12:55.513786 | orchestrator | 22:12:55.513 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-07-05 22:12:55.700626 | orchestrator | 22:12:55.700 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=22f51659-0cd2-4050-bba7-7aafb9a9a65a]
2025-07-05 22:12:55.710872 | orchestrator | 22:12:55.710 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-07-05 22:12:55.710977 | orchestrator | 22:12:55.710 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-07-05 22:12:55.712305 | orchestrator | 22:12:55.712 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-07-05 22:12:55.716344 | orchestrator | 22:12:55.716 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-07-05 22:12:55.716969 | orchestrator | 22:12:55.716 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-07-05 22:12:55.721892 | orchestrator | 22:12:55.721 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
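The management subnet created above was planned earlier with CIDR 192.168.16.0/20, two public DNS resolvers, and a DHCP allocation pool of 192.168.31.200-250. A sketch of roughly matching HCL; the `network_id` reference is an assumption, since the plan shows it only as "(known after apply)":

```hcl
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP leases come only from this range; the rest of the /20 stays
  # free for statically addressed ports.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```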
2025-07-05 22:12:55.724054 | orchestrator | 22:12:55.723 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=b7b599ce-2ade-4560-9d61-1de9ebef940c]
2025-07-05 22:12:55.728315 | orchestrator | 22:12:55.728 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-07-05 22:12:55.728361 | orchestrator | 22:12:55.728 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-07-05 22:12:55.728795 | orchestrator | 22:12:55.728 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-07-05 22:12:55.859153 | orchestrator | 22:12:55.858 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=b9e0d6bf-c9b0-4378-9632-7c11d6b0a82a]
2025-07-05 22:12:55.874538 | orchestrator | 22:12:55.874 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-07-05 22:12:55.944112 | orchestrator | 22:12:55.943 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=fdf9b13f-827e-48f6-bd77-8f3b9adcb22e]
2025-07-05 22:12:55.957986 | orchestrator | 22:12:55.957 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-07-05 22:12:56.078239 | orchestrator | 22:12:56.077 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=bf9e40d0-22dc-4fbb-863c-4f7b460d10ca]
2025-07-05 22:12:56.092660 | orchestrator | 22:12:56.092 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-07-05 22:12:56.119747 | orchestrator | 22:12:56.119 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=4dcb9aca-3296-4998-b860-0f31b877278b]
2025-07-05 22:12:56.132821 | orchestrator | 22:12:56.132 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-07-05 22:12:56.260291 | orchestrator | 22:12:56.259 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=3036fcb7-d5a4-4a8b-a4a9-8c2e85c03ecb]
2025-07-05 22:12:56.266761 | orchestrator | 22:12:56.266 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-07-05 22:12:56.311487 | orchestrator | 22:12:56.311 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=e39e7293-4a6f-483e-a316-94fa37800d04]
2025-07-05 22:12:56.324545 | orchestrator | 22:12:56.324 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-07-05 22:12:56.427736 | orchestrator | 22:12:56.427 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=e870abbc-5cb4-451a-b81d-126da160dee3]
2025-07-05 22:12:56.442673 | orchestrator | 22:12:56.442 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-07-05 22:12:56.522542 | orchestrator | 22:12:56.522 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=64ac5e86-51c1-4660-a104-16f3e79a74a1]
2025-07-05 22:12:56.653818 | orchestrator | 22:12:56.653 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=c712ca72-f1c1-4fc9-9368-ecd63d22321d]
2025-07-05 22:13:01.373998 | orchestrator | 22:13:01.373 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=edda0f07-e8b1-4f0b-8811-ecb226798708]
2025-07-05 22:13:01.466758 | orchestrator | 22:13:01.466 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=1a7b80e9-a8dd-474e-95a2-42499a4d66fa]
2025-07-05 22:13:01.649581 | orchestrator | 22:13:01.649 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=aa391646-5ebc-411a-ab3c-34eef678b1be]
2025-07-05 22:13:01.801960 | orchestrator | 22:13:01.801 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=8eb5cdea-55fb-42f1-b7c8-bd40204ad7e3]
2025-07-05 22:13:01.940503 | orchestrator | 22:13:01.940 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=8945ebab-ea7d-4298-847a-bd28099540a8]
2025-07-05 22:13:01.962307 | orchestrator | 22:13:01.961 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=b690ffb1-0c00-4c9b-a525-a41ed42e3b11]
2025-07-05 22:13:02.649555 | orchestrator | 22:13:02.649 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 7s [id=6ba00f36-c980-4373-9cca-994544bb7791]
2025-07-05 22:13:03.204790 | orchestrator | 22:13:03.204 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=b5d59c16-1430-4788-b8e1-05e4f3b2d474]
2025-07-05 22:13:03.230610 | orchestrator | 22:13:03.230 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-07-05 22:13:03.247296 | orchestrator | 22:13:03.247 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-07-05 22:13:03.248268 | orchestrator | 22:13:03.248 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-07-05 22:13:03.253078 | orchestrator | 22:13:03.252 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-07-05 22:13:03.261441 | orchestrator | 22:13:03.259 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-07-05 22:13:03.261499 | orchestrator | 22:13:03.261 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-07-05 22:13:03.263462 | orchestrator | 22:13:03.263 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-07-05 22:13:09.522960 | orchestrator | 22:13:09.522 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=9705385b-f51d-4cf6-a685-4bc85ba9dbc9]
2025-07-05 22:13:09.532285 | orchestrator | 22:13:09.531 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-07-05 22:13:09.537247 | orchestrator | 22:13:09.536 STDOUT terraform: local_file.inventory: Creating...
2025-07-05 22:13:09.539230 | orchestrator | 22:13:09.539 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-07-05 22:13:09.540831 | orchestrator | 22:13:09.540 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=6a8ed5e08a2dfc1d9a7a6958db78a68b60a8cee3]
2025-07-05 22:13:09.542868 | orchestrator | 22:13:09.542 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=9a406fd331cb422ecdb6995d61a3d24a753ec525]
2025-07-05 22:13:10.919014 | orchestrator | 22:13:10.918 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=9705385b-f51d-4cf6-a685-4bc85ba9dbc9]
2025-07-05 22:13:13.248929 | orchestrator | 22:13:13.248 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-07-05 22:13:13.255057 | orchestrator | 22:13:13.254 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-07-05 22:13:13.255152 | orchestrator | 22:13:13.254 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-07-05 22:13:13.261359 | orchestrator | 22:13:13.261 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-07-05 22:13:13.264817 | orchestrator | 22:13:13.264 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-07-05 22:13:13.264862 | orchestrator | 22:13:13.264 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-07-05 22:13:23.250289 | orchestrator | 22:13:23.249 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-07-05 22:13:23.256154 | orchestrator | 22:13:23.255 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-07-05 22:13:23.256257 | orchestrator | 22:13:23.256 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-07-05 22:13:23.262415 | orchestrator | 22:13:23.262 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-07-05 22:13:23.265794 | orchestrator | 22:13:23.265 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-07-05 22:13:23.265908 | orchestrator | 22:13:23.265 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-07-05 22:13:33.253239 | orchestrator | 22:13:33.252 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-07-05 22:13:33.256440 | orchestrator | 22:13:33.256 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-07-05 22:13:33.257326 | orchestrator | 22:13:33.256 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-07-05 22:13:33.263047 | orchestrator | 22:13:33.262 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-07-05 22:13:33.266489 | orchestrator | 22:13:33.266 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-07-05 22:13:33.266578 | orchestrator | 22:13:33.266 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-07-05 22:13:33.727494 | orchestrator | 22:13:33.727 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=2e218ff3-e120-4390-9711-be965aa90021]
2025-07-05 22:13:33.784707 | orchestrator | 22:13:33.784 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=c55c91fd-eb60-433a-8d12-82c991764079]
2025-07-05 22:13:33.828289 | orchestrator | 22:13:33.827 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=2ae67a62-b612-44e4-b176-baf34d069315]
2025-07-05 22:13:33.846215 | orchestrator | 22:13:33.845 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=6d81e0ea-56f0-47d1-9042-ab16bb85e780]
2025-07-05 22:13:34.293139 | orchestrator | 22:13:34.292 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=f9a24572-c751-4f67-b2e0-d4d07f0af481]
2025-07-05 22:13:34.417991 | orchestrator | 22:13:34.417 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=5f49b147-5ca2-49a4-a235-d2a6069c0811]
2025-07-05 22:13:34.436540 | orchestrator | 22:13:34.436 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-07-05 22:13:34.455053 | orchestrator | 22:13:34.454 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=6798581783577320699]
2025-07-05 22:13:34.456882 | orchestrator | 22:13:34.456 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-07-05 22:13:34.457092 | orchestrator | 22:13:34.456 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-07-05 22:13:34.457599 | orchestrator | 22:13:34.457 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-07-05 22:13:34.457830 | orchestrator | 22:13:34.457 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-07-05 22:13:34.458056 | orchestrator | 22:13:34.457 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-07-05 22:13:34.458240 | orchestrator | 22:13:34.458 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-07-05 22:13:34.459917 | orchestrator | 22:13:34.459 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-07-05 22:13:34.466895 | orchestrator | 22:13:34.466 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-07-05 22:13:34.473728 | orchestrator | 22:13:34.473 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-07-05 22:13:34.482139 | orchestrator | 22:13:34.481 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-07-05 22:13:39.767645 | orchestrator | 22:13:39.766 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=5f49b147-5ca2-49a4-a235-d2a6069c0811/b8c0761f-22b5-43a1-bf1b-76278e72919b]
2025-07-05 22:13:39.792577 | orchestrator | 22:13:39.792 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=c55c91fd-eb60-433a-8d12-82c991764079/f21d976d-9ccd-433e-8515-86bf556b9e6c]
2025-07-05 22:13:39.798918 | orchestrator | 22:13:39.798 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=6d81e0ea-56f0-47d1-9042-ab16bb85e780/21be9c94-8d55-4d0c-8ee7-a63f66622af7]
2025-07-05 22:13:39.821836 | orchestrator | 22:13:39.821 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=c55c91fd-eb60-433a-8d12-82c991764079/ba536110-d8e3-4c62-9758-5989affe708c]
2025-07-05 22:13:39.831710 | orchestrator | 22:13:39.831 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=5f49b147-5ca2-49a4-a235-d2a6069c0811/04acd911-9b95-486d-a663-ed49966b13bc]
2025-07-05 22:13:39.850354 | orchestrator | 22:13:39.849 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=6d81e0ea-56f0-47d1-9042-ab16bb85e780/ed4648fa-96a1-4881-93bd-124d41734f11]
2025-07-05 22:13:39.864778 | orchestrator | 22:13:39.864 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=5f49b147-5ca2-49a4-a235-d2a6069c0811/19122c33-f71f-45f9-9cf9-313728601123]
2025-07-05 22:13:39.872127 | orchestrator | 22:13:39.871 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=c55c91fd-eb60-433a-8d12-82c991764079/8a7d49ca-9238-4676-a846-742ace525871]
2025-07-05 22:13:39.887233 | orchestrator |
22:13:39.886 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=6d81e0ea-56f0-47d1-9042-ab16bb85e780/5326e027-1676-4a37-b778-dc441a5dd20f] 2025-07-05 22:13:44.476356 | orchestrator | 22:13:44.476 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-07-05 22:13:54.476717 | orchestrator | 22:13:54.476 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-07-05 22:13:54.951809 | orchestrator | 22:13:54.951 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=76f68f9d-82a6-46bc-881f-d351665fbcc1] 2025-07-05 22:13:55.006580 | orchestrator | 22:13:55.006 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-07-05 22:13:55.006642 | orchestrator | 22:13:55.006 STDOUT terraform: Outputs: 2025-07-05 22:13:55.006649 | orchestrator | 22:13:55.006 STDOUT terraform: manager_address = 2025-07-05 22:13:55.006655 | orchestrator | 22:13:55.006 STDOUT terraform: private_key = 2025-07-05 22:13:55.376403 | orchestrator | ok: Runtime: 0:01:32.762229 2025-07-05 22:13:55.410881 | 2025-07-05 22:13:55.411019 | TASK [Create infrastructure (stable)] 2025-07-05 22:13:55.953891 | orchestrator | skipping: Conditional result was False 2025-07-05 22:13:55.962886 | 2025-07-05 22:13:55.963023 | TASK [Fetch manager address] 2025-07-05 22:13:56.505828 | orchestrator | ok 2025-07-05 22:13:56.516515 | 2025-07-05 22:13:56.516675 | TASK [Set manager_host address] 2025-07-05 22:13:56.588503 | orchestrator | ok 2025-07-05 22:13:56.595558 | 2025-07-05 22:13:56.595689 | LOOP [Update ansible collections] 2025-07-05 22:13:57.587769 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-07-05 22:13:57.588560 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-05 22:13:57.588769 | orchestrator | 
Starting galaxy collection install process 2025-07-05 22:13:57.588916 | orchestrator | Process install dependency map 2025-07-05 22:13:57.588954 | orchestrator | Starting collection install process 2025-07-05 22:13:57.589137 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2025-07-05 22:13:57.589189 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2025-07-05 22:13:57.589332 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-07-05 22:13:57.589532 | orchestrator | ok: Item: commons Runtime: 0:00:00.691580 2025-07-05 22:13:58.471965 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-05 22:13:58.472198 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-07-05 22:13:58.472278 | orchestrator | Starting galaxy collection install process 2025-07-05 22:13:58.472338 | orchestrator | Process install dependency map 2025-07-05 22:13:58.472392 | orchestrator | Starting collection install process 2025-07-05 22:13:58.472442 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2025-07-05 22:13:58.472495 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2025-07-05 22:13:58.472544 | orchestrator | osism.services:999.0.0 was installed successfully 2025-07-05 22:13:58.472617 | orchestrator | ok: Item: services Runtime: 0:00:00.633190 2025-07-05 22:13:58.487376 | 2025-07-05 22:13:58.487512 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-05 22:14:09.068007 | orchestrator | ok 2025-07-05 22:14:09.078288 | 2025-07-05 22:14:09.078416 | TASK [Wait a little longer for the manager so that 
everything is ready] 2025-07-05 22:15:09.122972 | orchestrator | ok 2025-07-05 22:15:09.135663 | 2025-07-05 22:15:09.135817 | TASK [Fetch manager ssh hostkey] 2025-07-05 22:15:10.713978 | orchestrator | Output suppressed because no_log was given 2025-07-05 22:15:10.729015 | 2025-07-05 22:15:10.729203 | TASK [Get ssh keypair from terraform environment] 2025-07-05 22:15:11.270172 | orchestrator | ok: Runtime: 0:00:00.007148 2025-07-05 22:15:11.292252 | 2025-07-05 22:15:11.292438 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-05 22:15:11.333983 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-07-05 22:15:11.344557 | 2025-07-05 22:15:11.344693 | TASK [Run manager part 0] 2025-07-05 22:15:12.245185 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-05 22:15:12.292736 | orchestrator | 2025-07-05 22:15:12.292826 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-07-05 22:15:12.292843 | orchestrator | 2025-07-05 22:15:12.292872 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-07-05 22:15:14.000541 | orchestrator | ok: [testbed-manager] 2025-07-05 22:15:14.000590 | orchestrator | 2025-07-05 22:15:14.000609 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-05 22:15:14.000618 | orchestrator | 2025-07-05 22:15:14.000626 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-05 22:15:15.878604 | orchestrator | ok: [testbed-manager] 2025-07-05 22:15:15.878671 | orchestrator | 2025-07-05 22:15:15.878685 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-05 22:15:16.495038 | 
orchestrator | ok: [testbed-manager] 2025-07-05 22:15:16.495117 | orchestrator | 2025-07-05 22:15:16.495134 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-07-05 22:15:16.533182 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:15:16.533260 | orchestrator | 2025-07-05 22:15:16.533281 | orchestrator | TASK [Update package cache] **************************************************** 2025-07-05 22:15:16.559318 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:15:16.559404 | orchestrator | 2025-07-05 22:15:16.559416 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-05 22:15:16.588641 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:15:16.588689 | orchestrator | 2025-07-05 22:15:16.588698 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-05 22:15:16.619291 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:15:16.619459 | orchestrator | 2025-07-05 22:15:16.619477 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-05 22:15:16.650733 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:15:16.650789 | orchestrator | 2025-07-05 22:15:16.650798 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-07-05 22:15:16.685964 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:15:16.686050 | orchestrator | 2025-07-05 22:15:16.686064 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-07-05 22:15:16.713653 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:15:16.713714 | orchestrator | 2025-07-05 22:15:16.713724 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-07-05 22:15:17.501114 | orchestrator | changed: [testbed-manager] 2025-07-05 22:15:17.501160 | 
orchestrator | 2025-07-05 22:15:17.501166 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-07-05 22:18:39.710302 | orchestrator | changed: [testbed-manager] 2025-07-05 22:18:39.711923 | orchestrator | 2025-07-05 22:18:39.711948 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-05 22:20:08.130946 | orchestrator | changed: [testbed-manager] 2025-07-05 22:20:08.131051 | orchestrator | 2025-07-05 22:20:08.131079 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-05 22:20:27.597083 | orchestrator | changed: [testbed-manager] 2025-07-05 22:20:27.597240 | orchestrator | 2025-07-05 22:20:27.597262 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-05 22:20:36.120914 | orchestrator | changed: [testbed-manager] 2025-07-05 22:20:36.120957 | orchestrator | 2025-07-05 22:20:36.120965 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-05 22:20:36.168808 | orchestrator | ok: [testbed-manager] 2025-07-05 22:20:36.168847 | orchestrator | 2025-07-05 22:20:36.168856 | orchestrator | TASK [Get current user] ******************************************************** 2025-07-05 22:20:36.970982 | orchestrator | ok: [testbed-manager] 2025-07-05 22:20:36.971050 | orchestrator | 2025-07-05 22:20:36.971063 | orchestrator | TASK [Create venv directory] *************************************************** 2025-07-05 22:20:37.702774 | orchestrator | changed: [testbed-manager] 2025-07-05 22:20:37.702964 | orchestrator | 2025-07-05 22:20:37.702982 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-07-05 22:20:44.134875 | orchestrator | changed: [testbed-manager] 2025-07-05 22:20:44.134973 | orchestrator | 2025-07-05 22:20:44.135012 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-07-05 22:20:50.050938 | orchestrator | changed: [testbed-manager] 2025-07-05 22:20:50.051027 | orchestrator | 2025-07-05 22:20:50.051047 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-07-05 22:20:52.726741 | orchestrator | changed: [testbed-manager] 2025-07-05 22:20:52.726846 | orchestrator | 2025-07-05 22:20:52.726872 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-07-05 22:20:54.512976 | orchestrator | changed: [testbed-manager] 2025-07-05 22:20:54.513066 | orchestrator | 2025-07-05 22:20:54.513083 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-07-05 22:20:55.665562 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-05 22:20:55.665660 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-05 22:20:55.665685 | orchestrator | 2025-07-05 22:20:55.665707 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-07-05 22:20:55.708279 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-05 22:20:55.708374 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-05 22:20:55.708389 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-05 22:20:55.708404 | orchestrator | deprecation_warnings=False in ansible.cfg. 
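The "Install requests >= 2.32.2" and "Install docker >= 7.1.0" tasks above pin minimum package versions. A sketch of checking such a lower bound outside of pip, using `sort -V`; `version_ge` is a hypothetical helper name, and the sample version numbers are taken from the task names above:

```shell
#!/bin/sh
# Check that an installed version satisfies a ">=" pin, mirroring the
# version constraints in the "Install requests >= 2.32.2" and
# "Install docker >= 7.1.0" tasks above. version_ge is a hypothetical
# helper; GNU sort -V does the version-aware comparison.
version_ge() {
    # True if $1 >= $2
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

version_ge 2.32.3 2.32.2 && echo "requests pin satisfied"
version_ge 7.1.0  7.1.0  && echo "docker pin satisfied"
```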
2025-07-05 22:20:59.652626 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-05 22:20:59.652695 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-05 22:20:59.652705 | orchestrator | 2025-07-05 22:20:59.652714 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-07-05 22:21:00.233934 | orchestrator | changed: [testbed-manager] 2025-07-05 22:21:00.234056 | orchestrator | 2025-07-05 22:21:00.234079 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-07-05 22:22:06.821224 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-07-05 22:22:06.821313 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-07-05 22:22:06.821325 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-07-05 22:22:06.821334 | orchestrator | 2025-07-05 22:22:06.821343 | orchestrator | TASK [Install local collections] *********************************************** 2025-07-05 22:22:09.062171 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-07-05 22:22:09.062233 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-07-05 22:22:09.062240 | orchestrator | 2025-07-05 22:22:09.062246 | orchestrator | PLAY [Create operator user] **************************************************** 2025-07-05 22:22:09.062253 | orchestrator | 2025-07-05 22:22:09.062258 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-05 22:22:10.427654 | orchestrator | ok: [testbed-manager] 2025-07-05 22:22:10.427725 | orchestrator | 2025-07-05 22:22:10.427743 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-07-05 22:22:10.476449 | orchestrator | ok: [testbed-manager] 2025-07-05 22:22:10.476530 | 
orchestrator | 2025-07-05 22:22:10.476544 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-07-05 22:22:10.558706 | orchestrator | ok: [testbed-manager] 2025-07-05 22:22:10.558786 | orchestrator | 2025-07-05 22:22:10.558803 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-07-05 22:22:11.317744 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:11.317788 | orchestrator | 2025-07-05 22:22:11.317795 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-07-05 22:22:11.986605 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:11.987456 | orchestrator | 2025-07-05 22:22:11.987488 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-07-05 22:22:13.279171 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-07-05 22:22:13.279245 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-07-05 22:22:13.279257 | orchestrator | 2025-07-05 22:22:13.279282 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-07-05 22:22:14.740296 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:14.740366 | orchestrator | 2025-07-05 22:22:14.740375 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-07-05 22:22:16.441059 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-07-05 22:22:16.441242 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-07-05 22:22:16.441255 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-07-05 22:22:16.441263 | orchestrator | 2025-07-05 22:22:16.441272 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-07-05 22:22:16.498114 | orchestrator | skipping: 
[testbed-manager] 2025-07-05 22:22:16.498166 | orchestrator | 2025-07-05 22:22:16.498174 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-07-05 22:22:17.036185 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:17.036270 | orchestrator | 2025-07-05 22:22:17.036287 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-07-05 22:22:17.110221 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:22:17.110275 | orchestrator | 2025-07-05 22:22:17.110281 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-07-05 22:22:17.963943 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-05 22:22:17.963984 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:17.963992 | orchestrator | 2025-07-05 22:22:17.963999 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-07-05 22:22:18.000340 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:22:18.000376 | orchestrator | 2025-07-05 22:22:18.000383 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-07-05 22:22:18.033853 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:22:18.033896 | orchestrator | 2025-07-05 22:22:18.033905 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-07-05 22:22:18.074172 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:22:18.074215 | orchestrator | 2025-07-05 22:22:18.074224 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-07-05 22:22:18.129874 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:22:18.129935 | orchestrator | 2025-07-05 22:22:18.129951 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-07-05 22:22:18.843919 | orchestrator 
| ok: [testbed-manager] 2025-07-05 22:22:18.844005 | orchestrator | 2025-07-05 22:22:18.844022 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-05 22:22:18.844036 | orchestrator | 2025-07-05 22:22:18.844047 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-05 22:22:20.241165 | orchestrator | ok: [testbed-manager] 2025-07-05 22:22:20.241252 | orchestrator | 2025-07-05 22:22:20.241271 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-07-05 22:22:21.202010 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:21.202082 | orchestrator | 2025-07-05 22:22:21.202088 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:22:21.202094 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-07-05 22:22:21.202099 | orchestrator | 2025-07-05 22:22:21.698018 | orchestrator | ok: Runtime: 0:07:09.644691 2025-07-05 22:22:21.716120 | 2025-07-05 22:22:21.716285 | TASK [Point out that logging in on the manager is now possible] 2025-07-05 22:22:21.751748 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-07-05 22:22:21.761012 | 2025-07-05 22:22:21.761147 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-05 22:22:21.796585 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
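The PLAY RECAP line above is the per-host health summary of the play. A sketch of parsing it into its counters and gating on `failed`/`unreachable`; the recap string is copied from the log above, the parsing approach is ours:

```shell
#!/bin/sh
# Parse an Ansible PLAY RECAP line (copied from the log above) and
# report healthy only if no tasks failed and the host was reachable.
recap='testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0'

failed=$(echo "$recap" | tr ' ' '\n' | awk -F= '$1=="failed"{print $2}')
unreachable=$(echo "$recap" | tr ' ' '\n' | awk -F= '$1=="unreachable"{print $2}')

[ "$failed" -eq 0 ] && [ "$unreachable" -eq 0 ] && echo healthy
```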
2025-07-05 22:22:21.808461 | 2025-07-05 22:22:21.808650 | TASK [Run manager part 1 + 2] 2025-07-05 22:22:22.683962 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-05 22:22:22.739614 | orchestrator | 2025-07-05 22:22:22.739724 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-07-05 22:22:22.739744 | orchestrator | 2025-07-05 22:22:22.739806 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-05 22:22:25.720642 | orchestrator | ok: [testbed-manager] 2025-07-05 22:22:25.720843 | orchestrator | 2025-07-05 22:22:25.720907 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-05 22:22:25.759134 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:22:25.759199 | orchestrator | 2025-07-05 22:22:25.759213 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-05 22:22:25.798691 | orchestrator | ok: [testbed-manager] 2025-07-05 22:22:25.798751 | orchestrator | 2025-07-05 22:22:25.798774 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-05 22:22:25.839908 | orchestrator | ok: [testbed-manager] 2025-07-05 22:22:25.839963 | orchestrator | 2025-07-05 22:22:25.839972 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-05 22:22:25.916487 | orchestrator | ok: [testbed-manager] 2025-07-05 22:22:25.916544 | orchestrator | 2025-07-05 22:22:25.916551 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-05 22:22:25.989119 | orchestrator | ok: [testbed-manager] 2025-07-05 22:22:25.989177 | orchestrator | 2025-07-05 22:22:25.989185 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-05 22:22:26.027391 | 
orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-07-05 22:22:26.027434 | orchestrator | 2025-07-05 22:22:26.027441 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-05 22:22:26.789911 | orchestrator | ok: [testbed-manager] 2025-07-05 22:22:26.789973 | orchestrator | 2025-07-05 22:22:26.789985 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-05 22:22:26.826849 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:22:26.826904 | orchestrator | 2025-07-05 22:22:26.826912 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-05 22:22:28.234300 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:28.234393 | orchestrator | 2025-07-05 22:22:28.234404 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-05 22:22:28.809099 | orchestrator | ok: [testbed-manager] 2025-07-05 22:22:28.809163 | orchestrator | 2025-07-05 22:22:28.809177 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-05 22:22:29.957332 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:29.957418 | orchestrator | 2025-07-05 22:22:29.957437 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-05 22:22:43.268989 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:43.270464 | orchestrator | 2025-07-05 22:22:43.270487 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-05 22:22:44.003816 | orchestrator | ok: [testbed-manager] 2025-07-05 22:22:44.003934 | orchestrator | 2025-07-05 22:22:44.003955 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
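The repository role above removes the legacy sources.list and installs a deb822-style `ubuntu.sources` file. A minimal sketch of what such a file can look like; it is written to a temp path here, and the URI, suites, and keyring path are generic Ubuntu 24.04 defaults, not the role's actual template:

```shell
#!/bin/sh
# Write a minimal deb822-style ubuntu.sources file, as the
# "Copy ubuntu.sources file" task above does. A temp file stands in
# for /etc/apt/sources.list.d/ubuntu.sources; the URI, suites and
# keyring path are generic defaults, not the role's actual template.
sources=$(mktemp)
cat > "$sources" <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF

grep -c '^[A-Za-z-]*:' "$sources"
```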
2025-07-05 22:22:44.059401 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:22:44.059516 | orchestrator | 2025-07-05 22:22:44.059533 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-07-05 22:22:45.026100 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:45.026173 | orchestrator | 2025-07-05 22:22:45.026191 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-07-05 22:22:46.011194 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:46.011288 | orchestrator | 2025-07-05 22:22:46.011305 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-07-05 22:22:46.594678 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:46.594798 | orchestrator | 2025-07-05 22:22:46.594817 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-07-05 22:22:46.640144 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-05 22:22:46.640260 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-05 22:22:46.640276 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-05 22:22:46.640288 | orchestrator | deprecation_warnings=False in ansible.cfg. 
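The "Copy SSH private key" task above has to install the key with owner-only permissions, since ssh refuses group- or world-readable private keys. A sketch, with a temp file standing in for the operator's key and placeholder key material:

```shell
#!/bin/sh
# Install an SSH private key with owner-only permissions, as the
# "Copy SSH private key" task above must do (ssh rejects keys readable
# by group/other). A temp file stands in for the real key path, and
# the key body is a placeholder.
key=$(mktemp)
printf '%s\n' '-----BEGIN OPENSSH PRIVATE KEY-----' \
              '...' \
              '-----END OPENSSH PRIVATE KEY-----' > "$key"
chmod 600 "$key"

ls -l "$key" | cut -c1-10
```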
2025-07-05 22:22:48.661116 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:48.661188 | orchestrator | 2025-07-05 22:22:48.661198 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-07-05 22:22:57.700283 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-07-05 22:22:57.700413 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-07-05 22:22:57.700442 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-07-05 22:22:57.700461 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-07-05 22:22:57.700517 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-07-05 22:22:57.700552 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-07-05 22:22:57.700573 | orchestrator | 2025-07-05 22:22:57.700595 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-07-05 22:22:58.758512 | orchestrator | changed: [testbed-manager] 2025-07-05 22:22:58.758606 | orchestrator | 2025-07-05 22:22:58.758623 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-07-05 22:22:58.806859 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:22:58.806963 | orchestrator | 2025-07-05 22:22:58.806980 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-07-05 22:23:01.955880 | orchestrator | changed: [testbed-manager] 2025-07-05 22:23:01.955956 | orchestrator | 2025-07-05 22:23:01.955967 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-07-05 22:23:02.004035 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:23:02.004120 | orchestrator | 2025-07-05 22:23:02.004144 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-07-05 22:24:37.602684 | orchestrator | changed: [testbed-manager] 2025-07-05 
22:24:37.602795 | orchestrator |
2025-07-05 22:24:37.602815 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-05 22:24:38.730502 | orchestrator | ok: [testbed-manager]
2025-07-05 22:24:38.730593 | orchestrator |
2025-07-05 22:24:38.730620 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 22:24:38.730643 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-07-05 22:24:38.730661 | orchestrator |
2025-07-05 22:24:38.941091 | orchestrator | ok: Runtime: 0:02:16.701435
2025-07-05 22:24:38.960188 |
2025-07-05 22:24:38.960377 | TASK [Reboot manager]
2025-07-05 22:24:40.507138 | orchestrator | ok: Runtime: 0:00:00.944124
2025-07-05 22:24:40.519605 |
2025-07-05 22:24:40.519755 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-05 22:24:54.974712 | orchestrator | ok
2025-07-05 22:24:54.985071 |
2025-07-05 22:24:54.985197 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-05 22:25:55.023000 | orchestrator | ok
2025-07-05 22:25:55.030409 |
2025-07-05 22:25:55.030527 | TASK [Deploy manager + bootstrap nodes]
2025-07-05 22:25:57.572778 | orchestrator |
2025-07-05 22:25:57.572970 | orchestrator | # DEPLOY MANAGER
2025-07-05 22:25:57.572994 | orchestrator |
2025-07-05 22:25:57.573008 | orchestrator | + set -e
2025-07-05 22:25:57.573021 | orchestrator | + echo
2025-07-05 22:25:57.573034 | orchestrator | + echo '# DEPLOY MANAGER'
2025-07-05 22:25:57.573051 | orchestrator | + echo
2025-07-05 22:25:57.573129 | orchestrator | + cat /opt/manager-vars.sh
2025-07-05 22:25:57.576402 | orchestrator | export NUMBER_OF_NODES=6
2025-07-05 22:25:57.576455 | orchestrator |
2025-07-05 22:25:57.576468 | orchestrator | export CEPH_VERSION=reef
2025-07-05 22:25:57.576481 | orchestrator | export CONFIGURATION_VERSION=main
2025-07-05 22:25:57.576493 | orchestrator | export MANAGER_VERSION=latest
2025-07-05 22:25:57.576518 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-07-05 22:25:57.576529 | orchestrator |
2025-07-05 22:25:57.576547 | orchestrator | export ARA=false
2025-07-05 22:25:57.576558 | orchestrator | export DEPLOY_MODE=manager
2025-07-05 22:25:57.576576 | orchestrator | export TEMPEST=false
2025-07-05 22:25:57.576587 | orchestrator | export IS_ZUUL=true
2025-07-05 22:25:57.576598 | orchestrator |
2025-07-05 22:25:57.576617 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.94
2025-07-05 22:25:57.576628 | orchestrator | export EXTERNAL_API=false
2025-07-05 22:25:57.576639 | orchestrator |
2025-07-05 22:25:57.576649 | orchestrator | export IMAGE_USER=ubuntu
2025-07-05 22:25:57.576664 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-07-05 22:25:57.576674 | orchestrator |
2025-07-05 22:25:57.576685 | orchestrator | export CEPH_STACK=ceph-ansible
2025-07-05 22:25:57.576704 | orchestrator |
2025-07-05 22:25:57.576715 | orchestrator | + echo
2025-07-05 22:25:57.576731 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-05 22:25:57.577559 | orchestrator | ++ export INTERACTIVE=false
2025-07-05 22:25:57.577577 | orchestrator | ++ INTERACTIVE=false
2025-07-05 22:25:57.577591 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-05 22:25:57.577604 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-05 22:25:57.577888 | orchestrator | + source /opt/manager-vars.sh
2025-07-05 22:25:57.577904 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-05 22:25:57.577917 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-05 22:25:57.577927 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-05 22:25:57.577938 | orchestrator | ++ CEPH_VERSION=reef
2025-07-05 22:25:57.577948 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-05 22:25:57.577959 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-05 22:25:57.578002 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-05 22:25:57.578014 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-05 22:25:57.578135 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-05 22:25:57.578156 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-05 22:25:57.578168 | orchestrator | ++ export ARA=false
2025-07-05 22:25:57.578178 | orchestrator | ++ ARA=false
2025-07-05 22:25:57.578189 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-05 22:25:57.578200 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-05 22:25:57.578210 | orchestrator | ++ export TEMPEST=false
2025-07-05 22:25:57.578221 | orchestrator | ++ TEMPEST=false
2025-07-05 22:25:57.578231 | orchestrator | ++ export IS_ZUUL=true
2025-07-05 22:25:57.578242 | orchestrator | ++ IS_ZUUL=true
2025-07-05 22:25:57.578258 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.94
2025-07-05 22:25:57.578269 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.94
2025-07-05 22:25:57.578280 | orchestrator | ++ export EXTERNAL_API=false
2025-07-05 22:25:57.578291 | orchestrator | ++ EXTERNAL_API=false
2025-07-05 22:25:57.578301 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-05 22:25:57.578312 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-05 22:25:57.578323 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-05 22:25:57.578334 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-05 22:25:57.578345 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-05 22:25:57.578355 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-05 22:25:57.578366 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-07-05 22:25:57.635445 | orchestrator | + docker version
2025-07-05 22:25:57.898119 | orchestrator | Client: Docker Engine - Community
2025-07-05 22:25:57.898216 | orchestrator | Version: 27.5.1
2025-07-05 22:25:57.898227 | orchestrator | API version: 1.47
2025-07-05 22:25:57.898232 | orchestrator | Go version: go1.22.11
2025-07-05 22:25:57.898237 | orchestrator | Git commit: 9f9e405
2025-07-05 22:25:57.898242 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-05 22:25:57.898248 | orchestrator | OS/Arch: linux/amd64
2025-07-05 22:25:57.898253 | orchestrator | Context: default
2025-07-05 22:25:57.898257 | orchestrator |
2025-07-05 22:25:57.898262 | orchestrator | Server: Docker Engine - Community
2025-07-05 22:25:57.898267 | orchestrator | Engine:
2025-07-05 22:25:57.898272 | orchestrator | Version: 27.5.1
2025-07-05 22:25:57.898277 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-07-05 22:25:57.898305 | orchestrator | Go version: go1.22.11
2025-07-05 22:25:57.898310 | orchestrator | Git commit: 4c9b3b0
2025-07-05 22:25:57.898315 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-05 22:25:57.898320 | orchestrator | OS/Arch: linux/amd64
2025-07-05 22:25:57.898325 | orchestrator | Experimental: false
2025-07-05 22:25:57.898330 | orchestrator | containerd:
2025-07-05 22:25:57.898335 | orchestrator | Version: 1.7.27
2025-07-05 22:25:57.898341 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-07-05 22:25:57.898346 | orchestrator | runc:
2025-07-05 22:25:57.898361 | orchestrator | Version: 1.2.5
2025-07-05 22:25:57.898367 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-07-05 22:25:57.898372 | orchestrator | docker-init:
2025-07-05 22:25:57.898377 | orchestrator | Version: 0.19.0
2025-07-05 22:25:57.898382 | orchestrator | GitCommit: de40ad0
2025-07-05 22:25:57.901928 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-07-05 22:25:57.911658 | orchestrator | + set -e
2025-07-05 22:25:57.911749 | orchestrator | + source /opt/manager-vars.sh
2025-07-05 22:25:57.911763 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-05 22:25:57.911774 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-05 22:25:57.911785 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-05 22:25:57.911796 | orchestrator | ++ CEPH_VERSION=reef
2025-07-05 22:25:57.911807 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-05 22:25:57.911866 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-05 22:25:57.911886 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-05 22:25:57.911902 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-05 22:25:57.911914 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-05 22:25:57.911924 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-05 22:25:57.911935 | orchestrator | ++ export ARA=false
2025-07-05 22:25:57.911946 | orchestrator | ++ ARA=false
2025-07-05 22:25:57.911958 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-05 22:25:57.911968 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-05 22:25:57.911979 | orchestrator | ++ export TEMPEST=false
2025-07-05 22:25:57.911989 | orchestrator | ++ TEMPEST=false
2025-07-05 22:25:57.912000 | orchestrator | ++ export IS_ZUUL=true
2025-07-05 22:25:57.912010 | orchestrator | ++ IS_ZUUL=true
2025-07-05 22:25:57.912020 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.94
2025-07-05 22:25:57.912031 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.94
2025-07-05 22:25:57.912041 | orchestrator | ++ export EXTERNAL_API=false
2025-07-05 22:25:57.912052 | orchestrator | ++ EXTERNAL_API=false
2025-07-05 22:25:57.912062 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-05 22:25:57.912073 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-05 22:25:57.912084 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-05 22:25:57.912094 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-05 22:25:57.912136 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-05 22:25:57.912148 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-05 22:25:57.912158 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-05 22:25:57.912179 | orchestrator | ++ export INTERACTIVE=false
2025-07-05 22:25:57.912190 | orchestrator | ++ INTERACTIVE=false
2025-07-05 22:25:57.912201 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-05 22:25:57.912216 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-05 22:25:57.912227 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-05 22:25:57.912238 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-05 22:25:57.912249 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-07-05 22:25:57.919244 | orchestrator | + set -e
2025-07-05 22:25:57.919308 | orchestrator | + VERSION=reef
2025-07-05 22:25:57.920555 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-07-05 22:25:57.927049 | orchestrator | + [[ -n ceph_version: reef ]]
2025-07-05 22:25:57.927073 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-07-05 22:25:57.932573 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-07-05 22:25:57.939479 | orchestrator | + set -e
2025-07-05 22:25:57.939512 | orchestrator | + VERSION=2024.2
2025-07-05 22:25:57.940809 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-07-05 22:25:57.944501 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-07-05 22:25:57.944524 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-07-05 22:25:57.950004 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-07-05 22:25:57.950804 | orchestrator | ++ semver latest 7.0.0
2025-07-05 22:25:58.011949 | orchestrator | + [[ -1 -ge 0 ]]
2025-07-05 22:25:58.012042 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-05 22:25:58.012055 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-07-05 22:25:58.012068 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-07-05 22:25:58.107860 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-05 22:25:58.109179 | orchestrator | + source /opt/venv/bin/activate
2025-07-05 22:25:58.110443 | orchestrator | ++ deactivate nondestructive
2025-07-05 22:25:58.110545 | orchestrator | ++ '[' -n '' ']'
2025-07-05 22:25:58.110560 | orchestrator | ++ '[' -n '' ']'
2025-07-05 22:25:58.110578 | orchestrator | ++ hash -r
2025-07-05 22:25:58.110589 | orchestrator | ++ '[' -n '' ']'
2025-07-05 22:25:58.110600 | orchestrator | ++ unset VIRTUAL_ENV
2025-07-05 22:25:58.110617 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-07-05 22:25:58.110628 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-07-05 22:25:58.110652 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-07-05 22:25:58.110666 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-07-05 22:25:58.110678 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-07-05 22:25:58.110689 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-07-05 22:25:58.110701 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-05 22:25:58.110718 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-05 22:25:58.110729 | orchestrator | ++ export PATH
2025-07-05 22:25:58.110739 | orchestrator | ++ '[' -n '' ']'
2025-07-05 22:25:58.110750 | orchestrator | ++ '[' -z '' ']'
2025-07-05 22:25:58.110771 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-07-05 22:25:58.110783 | orchestrator | ++ PS1='(venv) '
2025-07-05 22:25:58.110793 | orchestrator | ++ export PS1
2025-07-05 22:25:58.110804 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-07-05 22:25:58.110814 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-07-05 22:25:58.110825 | orchestrator | ++ hash -r
2025-07-05 22:25:58.110965 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-07-05 22:25:59.339958 | orchestrator |
2025-07-05 22:25:59.340076 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-07-05 22:25:59.340093 | orchestrator |
2025-07-05 22:25:59.340143 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-05 22:25:59.913309 | orchestrator | ok: [testbed-manager]
2025-07-05 22:25:59.913439 | orchestrator |
2025-07-05 22:25:59.913457 | orchestrator | TASK [Copy fact files] *********************************************************
2025-07-05 22:26:00.875407 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:00.875513 | orchestrator |
2025-07-05 22:26:00.875527 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-07-05 22:26:00.875539 | orchestrator |
2025-07-05 22:26:00.875550 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-05 22:26:03.282997 | orchestrator | ok: [testbed-manager]
2025-07-05 22:26:03.283177 | orchestrator |
2025-07-05 22:26:03.283200 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-07-05 22:26:03.336895 | orchestrator | ok: [testbed-manager]
2025-07-05 22:26:03.337021 | orchestrator |
2025-07-05 22:26:03.337040 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-07-05 22:26:03.812258 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:03.812349 | orchestrator |
2025-07-05 22:26:03.812360 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-07-05 22:26:03.855364 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:26:03.855462 | orchestrator |
2025-07-05 22:26:03.855475 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-05 22:26:04.194202 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:04.194310 | orchestrator |
2025-07-05 22:26:04.194324 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-07-05 22:26:04.251732 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:26:04.251828 | orchestrator |
2025-07-05 22:26:04.251840 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-07-05 22:26:04.586990 | orchestrator | ok: [testbed-manager]
2025-07-05 22:26:04.587132 | orchestrator |
2025-07-05 22:26:04.587150 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-07-05 22:26:04.698415 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:26:04.698534 | orchestrator |
2025-07-05 22:26:04.698568 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-07-05 22:26:04.698581 | orchestrator |
2025-07-05 22:26:04.698606 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-05 22:26:07.507194 | orchestrator | ok: [testbed-manager]
2025-07-05 22:26:07.507304 | orchestrator |
2025-07-05 22:26:07.507320 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-07-05 22:26:07.600669 | orchestrator | included: osism.services.traefik for testbed-manager
2025-07-05 22:26:07.600765 | orchestrator |
2025-07-05 22:26:07.600779 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-07-05 22:26:07.650269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-07-05 22:26:07.650357 | orchestrator |
2025-07-05 22:26:07.650371 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-07-05 22:26:08.752025 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-07-05 22:26:08.752179 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
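As an aside on the set-ceph-version.sh and set-openstack-version.sh traces earlier in this log: both follow the same grep-then-sed pinning pattern against configuration.yml. A minimal sketch of that pattern (the `set_config_version` helper is hypothetical; only the grep/sed idiom and the file path come from the trace):

```shell
#!/usr/bin/env bash
# Sketch of the version-pinning pattern seen in the set-*-version.sh traces:
# rewrite "<key>: <value>" only when the key already exists in the file.
set_config_version() {
    local key=$1 version=$2 file=$3
    # grep confirms the key is present before sed rewrites it in place.
    if [[ -n "$(grep "^${key}:" "$file")" ]]; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$file"
    fi
}
```

In the job, this is what turns `ceph_version: <old>` into `ceph_version: reef` and `openstack_version: <old>` into `openstack_version: 2024.2` before the manager playbook runs.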
2025-07-05 22:26:08.752195 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-07-05 22:26:08.752207 | orchestrator |
2025-07-05 22:26:08.752219 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-07-05 22:26:10.595848 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-07-05 22:26:10.595959 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-07-05 22:26:10.595976 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-07-05 22:26:10.595989 | orchestrator |
2025-07-05 22:26:10.596001 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-07-05 22:26:11.244130 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-05 22:26:11.244253 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:11.244274 | orchestrator |
2025-07-05 22:26:11.244287 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-07-05 22:26:11.852440 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-05 22:26:11.852517 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:11.852524 | orchestrator |
2025-07-05 22:26:11.852529 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-07-05 22:26:11.914753 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:26:11.914856 | orchestrator |
2025-07-05 22:26:11.914871 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-07-05 22:26:12.278876 | orchestrator | ok: [testbed-manager]
2025-07-05 22:26:12.278981 | orchestrator |
2025-07-05 22:26:12.278997 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-07-05 22:26:12.351890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-07-05 22:26:12.352022 | orchestrator |
2025-07-05 22:26:12.352038 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-07-05 22:26:13.409438 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:13.409570 | orchestrator |
2025-07-05 22:26:13.409586 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-07-05 22:26:14.250417 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:14.250590 | orchestrator |
2025-07-05 22:26:14.250621 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-07-05 22:26:26.198996 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:26.199187 | orchestrator |
2025-07-05 22:26:26.199210 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-07-05 22:26:26.251677 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:26:26.251790 | orchestrator |
2025-07-05 22:26:26.251807 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-07-05 22:26:26.251820 | orchestrator |
2025-07-05 22:26:26.251832 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-05 22:26:28.051293 | orchestrator | ok: [testbed-manager]
2025-07-05 22:26:28.051419 | orchestrator |
2025-07-05 22:26:28.051466 | orchestrator | TASK [Apply manager role] ******************************************************
2025-07-05 22:26:28.154007 | orchestrator | included: osism.services.manager for testbed-manager
2025-07-05 22:26:28.154223 | orchestrator |
2025-07-05 22:26:28.154238 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-07-05 22:26:28.246946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-07-05 22:26:28.247114 | orchestrator |
2025-07-05 22:26:28.247130 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-07-05 22:26:30.908796 | orchestrator | ok: [testbed-manager]
2025-07-05 22:26:30.908928 | orchestrator |
2025-07-05 22:26:30.908945 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-07-05 22:26:30.958141 | orchestrator | ok: [testbed-manager]
2025-07-05 22:26:30.958270 | orchestrator |
2025-07-05 22:26:30.958291 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-07-05 22:26:31.087978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-07-05 22:26:31.088143 | orchestrator |
2025-07-05 22:26:31.088160 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-07-05 22:26:33.940728 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-07-05 22:26:33.940880 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-07-05 22:26:33.940898 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-07-05 22:26:33.940911 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-07-05 22:26:33.940922 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-07-05 22:26:33.940933 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-07-05 22:26:33.940944 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-07-05 22:26:33.940954 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-07-05 22:26:33.940966 | orchestrator |
2025-07-05 22:26:33.940977 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-07-05 22:26:34.579206 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:34.579310 | orchestrator |
2025-07-05 22:26:34.579324 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-07-05 22:26:35.235439 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:35.235545 | orchestrator |
2025-07-05 22:26:35.235560 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-07-05 22:26:35.315682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-07-05 22:26:35.315787 | orchestrator |
2025-07-05 22:26:35.315802 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-07-05 22:26:36.535479 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-07-05 22:26:36.535587 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-07-05 22:26:36.535601 | orchestrator |
2025-07-05 22:26:36.535613 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-07-05 22:26:37.143125 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:37.143228 | orchestrator |
2025-07-05 22:26:37.143242 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-07-05 22:26:37.204883 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:26:37.204985 | orchestrator |
2025-07-05 22:26:37.205039 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-07-05 22:26:37.270139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-07-05 22:26:37.270231 | orchestrator |
2025-07-05 22:26:37.270245 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-07-05 22:26:38.630618 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-05 22:26:38.630705 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-05 22:26:38.630713 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:38.630719 | orchestrator |
2025-07-05 22:26:38.630725 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-07-05 22:26:39.272338 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:39.272449 | orchestrator |
2025-07-05 22:26:39.272465 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-07-05 22:26:39.336401 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:26:39.336533 | orchestrator |
2025-07-05 22:26:39.336549 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-07-05 22:26:39.435104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-07-05 22:26:39.435207 | orchestrator |
2025-07-05 22:26:39.435221 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-07-05 22:26:39.962154 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:39.962267 | orchestrator |
2025-07-05 22:26:39.962285 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-07-05 22:26:40.383620 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:40.383724 | orchestrator |
2025-07-05 22:26:40.383739 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-07-05 22:26:41.637722 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-07-05 22:26:41.637854 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-07-05 22:26:41.637871 | orchestrator |
2025-07-05 22:26:41.637884 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-07-05 22:26:42.307131 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:42.307264 | orchestrator |
2025-07-05 22:26:42.307292 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-07-05 22:26:42.684426 | orchestrator | ok: [testbed-manager]
2025-07-05 22:26:42.684503 | orchestrator |
2025-07-05 22:26:42.684510 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-07-05 22:26:43.030200 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:43.030308 | orchestrator |
2025-07-05 22:26:43.030323 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-07-05 22:26:43.083174 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:26:43.083244 | orchestrator |
2025-07-05 22:26:43.083260 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-07-05 22:26:43.159215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-07-05 22:26:43.159313 | orchestrator |
2025-07-05 22:26:43.159327 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-07-05 22:26:43.203443 | orchestrator | ok: [testbed-manager]
2025-07-05 22:26:43.203540 | orchestrator |
2025-07-05 22:26:43.203554 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-07-05 22:26:45.270206 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-07-05 22:26:45.270303 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-07-05 22:26:45.270313 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-07-05 22:26:45.270319 | orchestrator |
2025-07-05 22:26:45.270325 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-07-05 22:26:46.034159 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:46.034271 | orchestrator |
2025-07-05 22:26:46.034288 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-07-05 22:26:46.749801 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:46.749905 | orchestrator |
2025-07-05 22:26:46.749920 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-07-05 22:26:47.469841 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:47.469924 | orchestrator |
2025-07-05 22:26:47.469933 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-07-05 22:26:47.553606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-07-05 22:26:47.553719 | orchestrator |
2025-07-05 22:26:47.553737 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-07-05 22:26:47.599651 | orchestrator | ok: [testbed-manager]
2025-07-05 22:26:47.599746 | orchestrator |
2025-07-05 22:26:47.599759 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-07-05 22:26:48.329413 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-07-05 22:26:48.329494 | orchestrator |
2025-07-05 22:26:48.329503 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-07-05 22:26:48.415531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-07-05 22:26:48.415627 | orchestrator |
2025-07-05 22:26:48.415640 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-07-05 22:26:49.117707 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:49.117814 | orchestrator |
2025-07-05 22:26:49.117830 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-07-05 22:26:49.749912 | orchestrator | ok: [testbed-manager]
2025-07-05 22:26:49.750099 | orchestrator |
2025-07-05 22:26:49.750117 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-07-05 22:26:49.804328 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:26:49.804411 | orchestrator |
2025-07-05 22:26:49.804423 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-07-05 22:26:49.866171 | orchestrator | ok: [testbed-manager]
2025-07-05 22:26:49.866280 | orchestrator |
2025-07-05 22:26:49.866296 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-07-05 22:26:50.695891 | orchestrator | changed: [testbed-manager]
2025-07-05 22:26:50.696030 | orchestrator |
2025-07-05 22:26:50.696047 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-07-05 22:28:00.098112 | orchestrator | changed: [testbed-manager]
2025-07-05 22:28:00.098213 | orchestrator |
2025-07-05 22:28:00.098223 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-07-05 22:28:01.105879 | orchestrator | ok: [testbed-manager]
2025-07-05 22:28:01.105986 | orchestrator |
2025-07-05 22:28:01.106002 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-07-05 22:28:01.164392 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:28:01.164483 | orchestrator |
2025-07-05 22:28:01.164498 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-07-05 22:28:04.008120 | orchestrator | changed: [testbed-manager]
2025-07-05 22:28:04.008257 | orchestrator |
2025-07-05 22:28:04.008279 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-07-05 22:28:04.077903 | orchestrator | ok: [testbed-manager]
2025-07-05 22:28:04.078013 | orchestrator |
2025-07-05 22:28:04.078084 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-05 22:28:04.078098 | orchestrator |
2025-07-05 22:28:04.078109 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-07-05 22:28:04.131538 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:28:04.131636 | orchestrator |
2025-07-05 22:28:04.131649 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-07-05 22:29:04.187283 | orchestrator | Pausing for 60 seconds
2025-07-05 22:29:04.187410 | orchestrator | changed: [testbed-manager]
2025-07-05 22:29:04.187427 | orchestrator |
2025-07-05 22:29:04.187440 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-07-05 22:29:08.234407 | orchestrator | changed: [testbed-manager]
2025-07-05 22:29:08.234590 | orchestrator |
2025-07-05 22:29:08.234684 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-07-05 22:29:49.949931 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-07-05 22:29:49.950112 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
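The FAILED - RETRYING lines above are Ansible's retries/delay loop on the health check: the task reruns until its condition passes, counting down the remaining attempts. The same semantics expressed as a plain shell helper (a sketch; `retry` is hypothetical, not code from this job):

```shell
#!/usr/bin/env bash
# Sketch of Ansible's retries/until behaviour as a shell helper: rerun a
# command up to $retries times, pausing $delay seconds between attempts,
# and print a countdown like the FAILED - RETRYING messages above.
retry() {
    local retries=$1 delay=$2
    shift 2
    local attempt
    for ((attempt = 1; attempt <= retries; attempt++)); do
        "$@" && return 0
        echo "FAILED - RETRYING: $* ($((retries - attempt)) retries left)"
        sleep "$delay"
    done
    return 1
}
```

Here the handler succeeded on its third check, which is why exactly two countdown lines appear before `changed: [testbed-manager]`.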
2025-07-05 22:29:49.950131 | orchestrator | changed: [testbed-manager]
2025-07-05 22:29:49.950182 | orchestrator |
2025-07-05 22:29:49.950195 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-07-05 22:29:59.217494 | orchestrator | changed: [testbed-manager]
2025-07-05 22:29:59.217704 | orchestrator |
2025-07-05 22:29:59.217734 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-07-05 22:29:59.309122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-07-05 22:29:59.309239 | orchestrator |
2025-07-05 22:29:59.309249 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-05 22:29:59.309258 | orchestrator |
2025-07-05 22:29:59.309266 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-07-05 22:29:59.360166 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:29:59.360269 | orchestrator |
2025-07-05 22:29:59.360282 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 22:29:59.360295 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-07-05 22:29:59.360306 | orchestrator |
2025-07-05 22:29:59.453104 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-05 22:29:59.453203 | orchestrator | + deactivate
2025-07-05 22:29:59.453217 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-05 22:29:59.453231 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-05 22:29:59.453241 | orchestrator | + export PATH
2025-07-05 22:29:59.453252 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-05 22:29:59.453263 | orchestrator | + '[' -n '' ']'
2025-07-05 22:29:59.453274 | orchestrator | + hash -r
2025-07-05 22:29:59.453285 | orchestrator | + '[' -n '' ']'
2025-07-05 22:29:59.453296 | orchestrator | + unset VIRTUAL_ENV
2025-07-05 22:29:59.453306 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-05 22:29:59.453342 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-05 22:29:59.453618 | orchestrator | + unset -f deactivate
2025-07-05 22:29:59.453723 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-07-05 22:29:59.461695 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-05 22:29:59.461779 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-05 22:29:59.461793 | orchestrator | + local max_attempts=60
2025-07-05 22:29:59.461805 | orchestrator | + local name=ceph-ansible
2025-07-05 22:29:59.461816 | orchestrator | + local attempt_num=1
2025-07-05 22:29:59.462707 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-05 22:29:59.495360 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-05 22:29:59.495461 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-05 22:29:59.495475 | orchestrator | + local max_attempts=60
2025-07-05 22:29:59.495487 | orchestrator | + local name=kolla-ansible
2025-07-05 22:29:59.495499 | orchestrator | + local attempt_num=1
2025-07-05 22:29:59.495776 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-05 22:29:59.527021 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-05 22:29:59.527112 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-05 22:29:59.527126 | orchestrator | + local max_attempts=60
2025-07-05 22:29:59.527137 | orchestrator | + local name=osism-ansible
2025-07-05 22:29:59.527148 | orchestrator | + local attempt_num=1
2025-07-05 22:29:59.527425 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-05 22:29:59.562151 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-05 22:29:59.562237 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-05 22:29:59.562249 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-05 22:30:00.287792 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-07-05 22:30:00.530350 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-07-05 22:30:00.530447 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-07-05 22:30:00.530463 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-07-05 22:30:00.530476 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-07-05 22:30:00.530489 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-07-05 22:30:00.530607 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-07-05 22:30:00.530622 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-07-05 22:30:00.530633 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2025-07-05 22:30:00.530644 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-07-05
22:30:00.530655 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-07-05 22:30:00.530666 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-07-05 22:30:00.530677 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-07-05 22:30:00.530687 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-07-05 22:30:00.530698 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-07-05 22:30:00.530724 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-07-05 22:30:00.538191 | orchestrator | ++ semver latest 7.0.0 2025-07-05 22:30:00.595865 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-05 22:30:00.595957 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-05 22:30:00.595972 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-07-05 22:30:00.600331 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-07-05 22:30:12.755041 | orchestrator | 2025-07-05 22:30:12 | INFO  | Task c3bbe387-51f2-48a5-9332-5b1915e35982 (resolvconf) was prepared for execution. 2025-07-05 22:30:12.755161 | orchestrator | 2025-07-05 22:30:12 | INFO  | It takes a moment until task c3bbe387-51f2-48a5-9332-5b1915e35982 (resolvconf) has been started and output is visible here. 
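The `wait_for_container_healthy` calls traced above poll `docker inspect` until each container reports `healthy`. A minimal sketch of such a helper, reconstructed from the trace (the 1-second retry interval and the `none` fallback for containers without a HEALTHCHECK are assumptions, not taken from the script itself):

```shell
#!/usr/bin/env bash
# Poll a container's health status until it is "healthy" or attempts run out.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    while true; do
        # "none" covers containers without a HEALTHCHECK or not yet created
        local status
        status="$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null || echo none)"
        [[ "$status" == healthy ]] && return 0
        if (( attempt_num >= max_attempts )); then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1   # retry interval is an assumption
    done
}
```

Used as in the trace: `wait_for_container_healthy 60 ceph-ansible`.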
2025-07-05 22:30:26.517039 | orchestrator | 2025-07-05 22:30:26.517157 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-07-05 22:30:26.517175 | orchestrator | 2025-07-05 22:30:26.517190 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-05 22:30:26.517203 | orchestrator | Saturday 05 July 2025 22:30:16 +0000 (0:00:00.162) 0:00:00.162 ********* 2025-07-05 22:30:26.517215 | orchestrator | ok: [testbed-manager] 2025-07-05 22:30:26.517228 | orchestrator | 2025-07-05 22:30:26.517240 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-07-05 22:30:26.517253 | orchestrator | Saturday 05 July 2025 22:30:20 +0000 (0:00:03.766) 0:00:03.928 ********* 2025-07-05 22:30:26.517265 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:30:26.517277 | orchestrator | 2025-07-05 22:30:26.517295 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-07-05 22:30:26.517308 | orchestrator | Saturday 05 July 2025 22:30:20 +0000 (0:00:00.077) 0:00:04.006 ********* 2025-07-05 22:30:26.517343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-07-05 22:30:26.517357 | orchestrator | 2025-07-05 22:30:26.517370 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-07-05 22:30:26.517381 | orchestrator | Saturday 05 July 2025 22:30:20 +0000 (0:00:00.080) 0:00:04.086 ********* 2025-07-05 22:30:26.517393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-07-05 22:30:26.517405 | orchestrator | 2025-07-05 22:30:26.517417 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-07-05 22:30:26.517429 | orchestrator | Saturday 05 July 2025 22:30:20 +0000 (0:00:00.088) 0:00:04.175 ********* 2025-07-05 22:30:26.517440 | orchestrator | ok: [testbed-manager] 2025-07-05 22:30:26.517513 | orchestrator | 2025-07-05 22:30:26.517527 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-07-05 22:30:26.517538 | orchestrator | Saturday 05 July 2025 22:30:21 +0000 (0:00:01.087) 0:00:05.262 ********* 2025-07-05 22:30:26.517549 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:30:26.517560 | orchestrator | 2025-07-05 22:30:26.517571 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-07-05 22:30:26.517581 | orchestrator | Saturday 05 July 2025 22:30:21 +0000 (0:00:00.077) 0:00:05.339 ********* 2025-07-05 22:30:26.517592 | orchestrator | ok: [testbed-manager] 2025-07-05 22:30:26.517603 | orchestrator | 2025-07-05 22:30:26.517614 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-07-05 22:30:26.517624 | orchestrator | Saturday 05 July 2025 22:30:22 +0000 (0:00:00.494) 0:00:05.834 ********* 2025-07-05 22:30:26.517635 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:30:26.517646 | orchestrator | 2025-07-05 22:30:26.517657 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-07-05 22:30:26.517669 | orchestrator | Saturday 05 July 2025 22:30:22 +0000 (0:00:00.083) 0:00:05.917 ********* 2025-07-05 22:30:26.517680 | orchestrator | changed: [testbed-manager] 2025-07-05 22:30:26.517691 | orchestrator | 2025-07-05 22:30:26.517701 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-07-05 22:30:26.517712 | orchestrator | Saturday 05 July 2025 22:30:23 +0000 (0:00:00.515) 0:00:06.432 ********* 2025-07-05 22:30:26.517723 | orchestrator | changed: 
[testbed-manager] 2025-07-05 22:30:26.517734 | orchestrator | 2025-07-05 22:30:26.517744 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-07-05 22:30:26.517755 | orchestrator | Saturday 05 July 2025 22:30:24 +0000 (0:00:01.058) 0:00:07.491 ********* 2025-07-05 22:30:26.517765 | orchestrator | ok: [testbed-manager] 2025-07-05 22:30:26.517776 | orchestrator | 2025-07-05 22:30:26.517787 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-07-05 22:30:26.517798 | orchestrator | Saturday 05 July 2025 22:30:25 +0000 (0:00:00.951) 0:00:08.442 ********* 2025-07-05 22:30:26.517809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-07-05 22:30:26.517819 | orchestrator | 2025-07-05 22:30:26.517840 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-07-05 22:30:26.517851 | orchestrator | Saturday 05 July 2025 22:30:25 +0000 (0:00:00.086) 0:00:08.529 ********* 2025-07-05 22:30:26.517862 | orchestrator | changed: [testbed-manager] 2025-07-05 22:30:26.517872 | orchestrator | 2025-07-05 22:30:26.517883 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:30:26.517895 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-05 22:30:26.517906 | orchestrator | 2025-07-05 22:30:26.517917 | orchestrator | 2025-07-05 22:30:26.517927 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:30:26.517947 | orchestrator | Saturday 05 July 2025 22:30:26 +0000 (0:00:01.125) 0:00:09.654 ********* 2025-07-05 22:30:26.517958 | orchestrator | =============================================================================== 2025-07-05 22:30:26.517969 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.77s 2025-07-05 22:30:26.517979 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.13s 2025-07-05 22:30:26.517990 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.09s 2025-07-05 22:30:26.518001 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.06s 2025-07-05 22:30:26.518012 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s 2025-07-05 22:30:26.518083 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2025-07-05 22:30:26.518117 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2025-07-05 22:30:26.518128 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-07-05 22:30:26.518139 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-07-05 22:30:26.518150 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-07-05 22:30:26.518160 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-07-05 22:30:26.518171 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s 2025-07-05 22:30:26.518181 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.08s 2025-07-05 22:30:26.802309 | orchestrator | + osism apply sshconfig 2025-07-05 22:30:38.756688 | orchestrator | 2025-07-05 22:30:38 | INFO  | Task d00c7623-6de3-43a4-8d73-ebf59aea5f87 (sshconfig) was prepared for execution. 
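The resolvconf play above replaces `/etc/resolv.conf` with a symlink to systemd-resolved's stub resolver. The core of that step can be sketched as follows — a simplification of the role, assuming the standard Debian/Ubuntu paths; the optional `root` prefix is added here purely for testability and is not part of the role:

```shell
#!/usr/bin/env bash
# Point /etc/resolv.conf at the systemd-resolved stub resolver, as the
# osism.commons.resolvconf role does. The $root prefix is an illustration
# aid; pass nothing on a real host.
link_stub_resolv() {
    local root="${1:-}"
    # -n: replace the link itself rather than dereferencing an existing one
    ln -sfn /run/systemd/resolve/stub-resolv.conf "${root}/etc/resolv.conf"
}

# On a real host the role then starts/enables and restarts the service:
#   systemctl enable --now systemd-resolved
#   systemctl restart systemd-resolved
```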
2025-07-05 22:30:38.756766 | orchestrator | 2025-07-05 22:30:38 | INFO  | It takes a moment until task d00c7623-6de3-43a4-8d73-ebf59aea5f87 (sshconfig) has been started and output is visible here. 2025-07-05 22:30:50.565135 | orchestrator | 2025-07-05 22:30:50.565256 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-07-05 22:30:50.565272 | orchestrator | 2025-07-05 22:30:50.565284 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-07-05 22:30:50.565296 | orchestrator | Saturday 05 July 2025 22:30:42 +0000 (0:00:00.163) 0:00:00.163 ********* 2025-07-05 22:30:50.565307 | orchestrator | ok: [testbed-manager] 2025-07-05 22:30:50.565318 | orchestrator | 2025-07-05 22:30:50.565329 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-07-05 22:30:50.565340 | orchestrator | Saturday 05 July 2025 22:30:43 +0000 (0:00:00.559) 0:00:00.722 ********* 2025-07-05 22:30:50.565351 | orchestrator | changed: [testbed-manager] 2025-07-05 22:30:50.565363 | orchestrator | 2025-07-05 22:30:50.565374 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-07-05 22:30:50.565385 | orchestrator | Saturday 05 July 2025 22:30:43 +0000 (0:00:00.510) 0:00:01.232 ********* 2025-07-05 22:30:50.565396 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-07-05 22:30:50.565461 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-07-05 22:30:50.565475 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-07-05 22:30:50.565486 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-07-05 22:30:50.565497 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-07-05 22:30:50.565508 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-07-05 22:30:50.565540 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2025-07-05 22:30:50.565552 | orchestrator | 2025-07-05 22:30:50.565563 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-07-05 22:30:50.565574 | orchestrator | Saturday 05 July 2025 22:30:49 +0000 (0:00:05.892) 0:00:07.124 ********* 2025-07-05 22:30:50.565608 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:30:50.565620 | orchestrator | 2025-07-05 22:30:50.565630 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-07-05 22:30:50.565641 | orchestrator | Saturday 05 July 2025 22:30:49 +0000 (0:00:00.070) 0:00:07.194 ********* 2025-07-05 22:30:50.565652 | orchestrator | changed: [testbed-manager] 2025-07-05 22:30:50.565663 | orchestrator | 2025-07-05 22:30:50.565675 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:30:50.565690 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:30:50.565703 | orchestrator | 2025-07-05 22:30:50.565716 | orchestrator | 2025-07-05 22:30:50.565728 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:30:50.565741 | orchestrator | Saturday 05 July 2025 22:30:50 +0000 (0:00:00.586) 0:00:07.781 ********* 2025-07-05 22:30:50.565754 | orchestrator | =============================================================================== 2025-07-05 22:30:50.565766 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.89s 2025-07-05 22:30:50.565779 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s 2025-07-05 22:30:50.565791 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2025-07-05 22:30:50.565803 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.51s 2025-07-05 22:30:50.565816 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-07-05 22:30:50.831343 | orchestrator | + osism apply known-hosts 2025-07-05 22:31:02.716007 | orchestrator | 2025-07-05 22:31:02 | INFO  | Task caef88f0-3e6a-4605-aea2-b55ed1d1b7a4 (known-hosts) was prepared for execution. 2025-07-05 22:31:02.716164 | orchestrator | 2025-07-05 22:31:02 | INFO  | It takes a moment until task caef88f0-3e6a-4605-aea2-b55ed1d1b7a4 (known-hosts) has been started and output is visible here. 2025-07-05 22:31:20.526786 | orchestrator | 2025-07-05 22:31:20.526917 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-07-05 22:31:20.526936 | orchestrator | 2025-07-05 22:31:20.526949 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-07-05 22:31:20.526961 | orchestrator | Saturday 05 July 2025 22:31:06 +0000 (0:00:00.165) 0:00:00.165 ********* 2025-07-05 22:31:20.526973 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-05 22:31:20.526985 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-05 22:31:20.526995 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-05 22:31:20.527006 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-05 22:31:20.527017 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-05 22:31:20.527028 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-05 22:31:20.527039 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-07-05 22:31:20.527050 | orchestrator | 2025-07-05 22:31:20.527061 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-07-05 22:31:20.527073 | orchestrator | Saturday 05 July 2025 22:31:12 +0000 (0:00:06.106) 0:00:06.272 ********* 2025-07-05 
22:31:20.527086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-05 22:31:20.527099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-05 22:31:20.527110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-05 22:31:20.527121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-05 22:31:20.527154 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-05 22:31:20.527178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-05 22:31:20.527190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-05 22:31:20.527200 | orchestrator | 2025-07-05 22:31:20.527214 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-05 22:31:20.527226 | orchestrator | Saturday 05 July 2025 22:31:12 +0000 (0:00:00.160) 0:00:06.433 ********* 2025-07-05 22:31:20.527239 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIGM4SxCESnupjPMSzaJ0Owhmg/zxcJ505twygb2Tw57j) 2025-07-05 22:31:20.527257 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWfbAI1WSwvIzpR3E6vFGaY2dEuDuRUlCbuWZmKbwiz8DVNzzjR4BZ4Sv+Nf7Di+SZt7BlwcdX3qzyBJk4pstJuAmfvMuOYYSfca4/8oJIapArpiCbwB2XF0S/gTJeB23TxQFeH+b2WW3likefIkqiaWDFW/heLZGTvgXJamHZb4WKFPcI2N5RtonmRUszOfOfv+5SKK8fy/GFQF+2Hk0oFzVX4kGBSjWDgaDyhOJdFV6Q+zT0nUSHFBmTjyd9JqLcHcTu9doRiEQ3POI3obzZ+0coWJXhqJVa9nu1sl0xFuVat61d63lmbuZ6rzfDQMlwnfpzMHvb7CFyVMUgEIEs20Ey9JxpNTyGbVfzFQP95SHY5nYVI3zAfyq6PgrijnAD+l8eoGMpa6Wy22nhVk7hB7Ue+Ri2WOqPlU/WKo565kHZT1R+jMsW4q9f1explcMOm+qySYjBLJTJoNQ/wHdd+hRx7zGJhXWSqTrSh8mki/e8la/V5uenep+u02nqM2c=) 2025-07-05 22:31:20.527274 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAskeVawBw+QlT6wZni9c5FTLeVd1nuErQgWv7Eg1jFMv5vEjSLe0tmBGYRj7CK0TjzW3EFbORWaV6jag7uqvxs=) 2025-07-05 22:31:20.527289 | orchestrator | 2025-07-05 22:31:20.527301 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-05 22:31:20.527314 | orchestrator | Saturday 05 July 2025 22:31:15 +0000 (0:00:02.202) 0:00:08.636 ********* 2025-07-05 22:31:20.527348 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDzgSHRm10TdcJqU3RSMAzg3LpHzgBAaTb4wbP0ZfH5CpQIUK/SRpROIWb5CIt2clYXFogaHyCDrOUMOtK4uF+MsCsnZ1n2Q5ZDeyPfnSbUAJaEUpQb/j1kyQ8BTM8QcTiM8neWUEA74j83ZhNXqRqOuSH7ZykbD1dmGMm4+bjjyDzIOT8tzp5WPa8bbCnOjtSzbNpy0iOJ07iEx/TMHEXrsk/L5P1bERy4Bwha5xxacpWW7p5e2Ef0WwdlABnkjic56F04TeQgAE2v09HKxpB8S5k5IPvV2FlycZWEKLqbwB7zDZs1IKgzxVtI4bAt5PXDktYeOyk1oNzNb7iaiacZe1kIa1sOlyohETH6As5N0kBwktIQOENog/DtmInNucPFG3uYTNuZiKVJzj7JjX0IpFapM3dYp8OBIHWIrRTXs+SWUCuy9zr5/Oyzis5/wCKThnkHeX1/2T9mT/pyxtPAyoTPh5XSR6M9LbFhy94AKEVUS2Y6JxyLKWywcVAdYu0=) 2025-07-05 22:31:20.527399 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPsJ9A2/lAVK1szWLqEHv2Az/eY3AlSZ6RrChSn8YDxp) 2025-07-05 22:31:20.527413 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIGlaDHORNypz2XyhudlhdfRpkFxyO7V2wrUjXNin9iOPQesYip00z4Afeny5uL2s9j5vpVgKR6MDhez3A4tI9g=) 2025-07-05 22:31:20.527425 | orchestrator | 2025-07-05 22:31:20.527438 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-05 22:31:20.527450 | orchestrator | Saturday 05 July 2025 22:31:16 +0000 (0:00:01.077) 0:00:09.713 ********* 2025-07-05 22:31:20.527463 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhK3irOnlJUqxtIlhlDpu/V5EWqTYqEkwKLNPInmntK3fP9h6ny6a7w8WJzB16jZW9AMWudzQBZkFhmKiNXFLAy1xOqDuD+QsXUYiou5uDGEH/FFU1TJdYuYwrqrC1ypGOVPaQnAcJrMTxTswo39i38BwrDC/kM7r/wmc161WPRZ/s9fvv8FLkenJOXjx42PgvCSJmoxlQ/yay7XbMUZ2C6m2UR9Bv8M7ki8z4FpewMp0T/ignQSAQ7EE4Tk+Xm2yozEeYOWSIwKnOgVo+sZJneusAOGTnlevMYYzf61Rpoti31mDfXTBlfHnoBKqDuxLhk39ZwK5PbKcBHx697lWeQkbC+6mMcoU1PsAQ0xGkzfFEyg5F8F0yRQTlP26oW9gIy8EkufxzF+yh2dWKRv+oyvWyXD32yxTT/DIydMur8EPhnUSX2i9C/N5W/hiXJMRUh8nihXFKzdEGq+sCR4c8Qr5Oo1mncBv3jC+VWRaHeXgQWW9ffq0BnVrqZHSwym0=) 2025-07-05 22:31:20.527486 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEpdD5D5svNNjeiBSLDGw/Ir5AxRy9kJSZvmbaDXHxCcroStxuUp6q3Mld7l5Fhiv96W4SY9QcOws7tTh0ZVgaU=) 2025-07-05 22:31:20.527499 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBbuTjAPdtgR3+rxz4YpLBwy6VX8lzC0ZR5Oo6qJOwsM) 2025-07-05 22:31:20.527511 | orchestrator | 2025-07-05 22:31:20.527524 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-05 22:31:20.527537 | orchestrator | Saturday 05 July 2025 22:31:17 +0000 (0:00:01.086) 
0:00:10.799 ********* 2025-07-05 22:31:20.527618 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFmkemR1O31jdOTWektsZ0ANNipg8FauX/CrlwODoMwGrcXUoTH7ANmD5UNwyKbKtCwIAc9Q9E8WgWXseX4JcmA=) 2025-07-05 22:31:20.527631 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDU/LrfD9A8phg+W+E/uvq8MYE82flhkG2KiftQmZSMTPOYK4OJIeFwgxLgEZix1eOFXToyfZMaJg+/KNJ3qtjEK+SSf+YOBxclKkwUVjkTJz8O4qyFPgz/sKg84bsKEU+IBpgayOBWB7WJaE/aw6s8GtlgunQcj78ff4uQSONnXbP3xJIKYKyKDg9OAhsjMNQ49o4eXYJiyLrIGZ9BLIku/Z3+kkpb6ZBbgNNgr3BPfhIMGfW6jwrT5W5JNubaEugJ3WRCHQ6Vm+X+IA+JT9VNJUyUsrXDN/teHfLoHtnWCHMF4YrB6OHKr6k1ubGw2io9NFaSjrAOW1VKpHZaSnpGVosnqwBSl8OMKUfQKbBDyHxtqWMy8kdRWO40ni9uTXaYD1xixyKu7GLf2vst7Z5oEb55pORBcDIP4Ude0z9rzZwaUz5EaOfYZpcWSOhd2KJyk7Flyz6wrj1ykuuF2BQnAPZlxLPq+iud5fmdKh5ibEUsX8jmOdBF2ocrM0cAht0=) 2025-07-05 22:31:20.527642 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBmGJKkEi/c7Ult9HSTV5TEj/lMmcUoYSjRpZS3kIB3S) 2025-07-05 22:31:20.527653 | orchestrator | 2025-07-05 22:31:20.527663 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-05 22:31:20.527674 | orchestrator | Saturday 05 July 2025 22:31:18 +0000 (0:00:01.074) 0:00:11.874 ********* 2025-07-05 22:31:20.527685 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCwSwy6Nr4yF9JLg32Siy5rTkQ2EoxpIoZJUshwCflGHtudm5cz9kV3tqLq9T6CO4NJYbWZ/1pPg2+93NBT1/qIsAA7o7R69IbySix705QPqKm250BMz5ygpcjtXtNDlJrjIBcZ3qVaJxxdGapFl6AwU4EVHcKt0zvcs3FST4CIjUXiARtpZwDek1kObTQjwVRqVIuz0SPyBFpppr4OxDHn4vD7bTFjCHlo8XdyXy7fVlDx8XiUwzeTNeMUU6mtG+hjF/fssnj++akl0QZhGME5cFw+w0gsh3v0WHCDpMNQoE8xbCZ6mp0OfdaPIYnmvnizzrMmXiSon0WpbYZrOEMegIQv0gjX4Ejcc7GPy8fwTwiE1dS1769IzZQ0z+DUMsaVc2eXToZq5xYISJ3HLqP+Gdm/CsdQ6oa+a5TSuEGiybhmwu+nZNpkNGeLnwrCkk0HxyQdl6/0t2jw+w8UowFuorqo1YQFD5bDol31+kPIExW49JIrPTubhFYWgGXgUnM=) 2025-07-05 22:31:20.527696 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCapNJX7oqtn8Pmzcpauj4VFn++rD07l1xdRsHCMSmJ75PyPj5BHKLPyO83DE+ydt3r13F02pcbB3HZJlm7abNw=) 2025-07-05 22:31:20.527707 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMUF2IbMhYRFBYIvebat7E8A+SVxhLyG3cA8TFM8nPXO) 2025-07-05 22:31:20.527718 | orchestrator | 2025-07-05 22:31:20.527728 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-05 22:31:20.527739 | orchestrator | Saturday 05 July 2025 22:31:19 +0000 (0:00:01.067) 0:00:12.942 ********* 2025-07-05 22:31:20.527758 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINZuhq7uhAhorK5wiQWJdIgzNw7I12MaJNsgrRoFGls/) 2025-07-05 22:31:31.332820 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCfLePfCqh83rsNwpb0UWZ0x1yH6oPUgRaUi/l5AsWa9f9Jl84qN5in3l6lYNZtViXZd5rm5U5dvg5CoqPK5TBGY9e2uZkZVClYSv+xRWXwtbHyCMTGQHradGwv4pt8lPCkGfbF0dRKmT4DE3IEMwza3qLd1b1s5aUu6XnKw5/014R9R5iaLgKpXPxz/poJ1lYZ1l+kSCnT2IwbVKhfR8IvrreDo8G+VP6ZrJnZjNK4yfZaoaXx4TbnZiOQ6Y4wsU4M+jkYAwOEeh6TujDc42QqaC9hCmc/6MJlRVN6ysARC2YA3Xp+fBcvHrzSUpKXlYr3wwuQyWy5Lk3gzHjGHBfy5bH8X1h7gk9P1oNcHrq8bLQ35trkAwwT5IpTbcYgh5J5RDfI3ldGlrJ38PzPKc5nZCScG1+e2wCQhEtR6K//5YNcwmUA5BcHPJPYSg7E59ejfAA5eTj4DmV3h9vT9BukHt0y2GQStyD9+MR4Pl46ttgHN32ulAxJEeVgyC3a1E8=)
2025-07-05 22:31:31.333019 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJZnDhu1brZnCS3kMtuUD4+RRi1gjnYd2i5JtLIK5o0M2yQ6v4H4n78z4xJz4zOyxxD5HKRF8SE/Af2dbsXMmCY=)
2025-07-05 22:31:31.333064 | orchestrator |
2025-07-05 22:31:31.333086 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-05 22:31:31.333105 | orchestrator | Saturday 05 July 2025 22:31:20 +0000 (0:00:01.071) 0:00:14.014 *********
2025-07-05 22:31:31.333123 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA/3oJFoNUbBkAIbs8MqeCDjFE4RArvqvkVZ5Jjy9LZt06spnkndJQkdWkZ5SWbCT6e9+6PjfQYTEL4Wj9BWdn0=)
2025-07-05 22:31:31.333141 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGs+Xkqa7FWWvYURFR6ycRbhlmQbpkW9sNZxM8iDoqxS)
2025-07-05 22:31:31.333160 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvh9ZHqYiBNHZfMeXt1F2/DEuNkFRrV0eZOMa5DYmRMa4XpBymysaA4Rde33JyobUZG/s09S0Hct7mbbOUyhWdrrU4v6cHlv9K2XO8udJuE3tZokBPGag+Ne0AlqOmwruj0TEFy4BCziwa6+itKQtNhNHiJUjWb2ov1lrYaJunmYPN+Y1I+8iTjH6YAuOew0ksHTcp+RIYpuxUlRNNx6f8Jpe6YDFM6ayOi1g/1SE+epJiB7EyEmfkm6O9Zo4NCD3vMf8J3dOWo8YhZVpSbNFxv0w0kwobw+/TvTxA3kL6A3V2QPNUAcnZrJb7hNXNBndB+CtFdAB0q04Q83GZMqQp10AoYsds4BPUGXMvKS7XvGemiLMDr7qOMeqgsyiMUvbs0NoQlbUjSU34QDqgptd+vFPKoiYKhUOdhUR/dPI9H+OjAbNAUrEDULvz8+OayzHianE9E3ZHCzQPp8YMS9g2BvOa88ZTg6uQvDvi5CMgNXdi1mjDOWgg0uq7VQBcrs0=)
2025-07-05 22:31:31.333178 | orchestrator |
2025-07-05 22:31:31.333196 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-07-05 22:31:31.333214 | orchestrator | Saturday 05 July 2025 22:31:21 +0000 (0:00:01.040) 0:00:15.055 *********
2025-07-05 22:31:31.333253 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-05 22:31:31.333272 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-05 22:31:31.333290 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-05 22:31:31.333307 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-05 22:31:31.333324 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-05 22:31:31.333369 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-05 22:31:31.333390 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-05 22:31:31.333408 | orchestrator |
2025-07-05 22:31:31.333426 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-07-05 22:31:31.333448 | orchestrator | Saturday 05 July 2025 22:31:26 +0000 (0:00:05.275) 0:00:20.330 *********
2025-07-05 22:31:31.333470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-05 22:31:31.333493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-05 22:31:31.333514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-05 22:31:31.333533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-05 22:31:31.333555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-05 22:31:31.333589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-05 22:31:31.333606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-05 22:31:31.333625 | orchestrator |
2025-07-05 22:31:31.333664 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-05 22:31:31.333681 | orchestrator | Saturday 05 July 2025 22:31:26 +0000 (0:00:00.165) 0:00:20.496 *********
2025-07-05 22:31:31.333699 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGM4SxCESnupjPMSzaJ0Owhmg/zxcJ505twygb2Tw57j)
2025-07-05 22:31:31.333718 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWfbAI1WSwvIzpR3E6vFGaY2dEuDuRUlCbuWZmKbwiz8DVNzzjR4BZ4Sv+Nf7Di+SZt7BlwcdX3qzyBJk4pstJuAmfvMuOYYSfca4/8oJIapArpiCbwB2XF0S/gTJeB23TxQFeH+b2WW3likefIkqiaWDFW/heLZGTvgXJamHZb4WKFPcI2N5RtonmRUszOfOfv+5SKK8fy/GFQF+2Hk0oFzVX4kGBSjWDgaDyhOJdFV6Q+zT0nUSHFBmTjyd9JqLcHcTu9doRiEQ3POI3obzZ+0coWJXhqJVa9nu1sl0xFuVat61d63lmbuZ6rzfDQMlwnfpzMHvb7CFyVMUgEIEs20Ey9JxpNTyGbVfzFQP95SHY5nYVI3zAfyq6PgrijnAD+l8eoGMpa6Wy22nhVk7hB7Ue+Ri2WOqPlU/WKo565kHZT1R+jMsW4q9f1explcMOm+qySYjBLJTJoNQ/wHdd+hRx7zGJhXWSqTrSh8mki/e8la/V5uenep+u02nqM2c=)
2025-07-05 22:31:31.333737 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAskeVawBw+QlT6wZni9c5FTLeVd1nuErQgWv7Eg1jFMv5vEjSLe0tmBGYRj7CK0TjzW3EFbORWaV6jag7uqvxs=)
2025-07-05 22:31:31.333754 | orchestrator |
2025-07-05 22:31:31.333773 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-05 22:31:31.333791 | orchestrator | Saturday 05 July 2025 22:31:28 +0000 (0:00:01.058) 0:00:21.554 *********
2025-07-05 22:31:31.333808 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIGlaDHORNypz2XyhudlhdfRpkFxyO7V2wrUjXNin9iOPQesYip00z4Afeny5uL2s9j5vpVgKR6MDhez3A4tI9g=)
2025-07-05 22:31:31.333826 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDzgSHRm10TdcJqU3RSMAzg3LpHzgBAaTb4wbP0ZfH5CpQIUK/SRpROIWb5CIt2clYXFogaHyCDrOUMOtK4uF+MsCsnZ1n2Q5ZDeyPfnSbUAJaEUpQb/j1kyQ8BTM8QcTiM8neWUEA74j83ZhNXqRqOuSH7ZykbD1dmGMm4+bjjyDzIOT8tzp5WPa8bbCnOjtSzbNpy0iOJ07iEx/TMHEXrsk/L5P1bERy4Bwha5xxacpWW7p5e2Ef0WwdlABnkjic56F04TeQgAE2v09HKxpB8S5k5IPvV2FlycZWEKLqbwB7zDZs1IKgzxVtI4bAt5PXDktYeOyk1oNzNb7iaiacZe1kIa1sOlyohETH6As5N0kBwktIQOENog/DtmInNucPFG3uYTNuZiKVJzj7JjX0IpFapM3dYp8OBIHWIrRTXs+SWUCuy9zr5/Oyzis5/wCKThnkHeX1/2T9mT/pyxtPAyoTPh5XSR6M9LbFhy94AKEVUS2Y6JxyLKWywcVAdYu0=)
2025-07-05 22:31:31.333844 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPsJ9A2/lAVK1szWLqEHv2Az/eY3AlSZ6RrChSn8YDxp)
2025-07-05 22:31:31.333862 | orchestrator |
2025-07-05 22:31:31.333942 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-05 22:31:31.333963 | orchestrator | Saturday 05 July 2025 22:31:29 +0000 (0:00:01.087) 0:00:22.642 *********
2025-07-05 22:31:31.333982 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEpdD5D5svNNjeiBSLDGw/Ir5AxRy9kJSZvmbaDXHxCcroStxuUp6q3Mld7l5Fhiv96W4SY9QcOws7tTh0ZVgaU=)
2025-07-05 22:31:31.334001 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBbuTjAPdtgR3+rxz4YpLBwy6VX8lzC0ZR5Oo6qJOwsM)
2025-07-05 22:31:31.334100 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhK3irOnlJUqxtIlhlDpu/V5EWqTYqEkwKLNPInmntK3fP9h6ny6a7w8WJzB16jZW9AMWudzQBZkFhmKiNXFLAy1xOqDuD+QsXUYiou5uDGEH/FFU1TJdYuYwrqrC1ypGOVPaQnAcJrMTxTswo39i38BwrDC/kM7r/wmc161WPRZ/s9fvv8FLkenJOXjx42PgvCSJmoxlQ/yay7XbMUZ2C6m2UR9Bv8M7ki8z4FpewMp0T/ignQSAQ7EE4Tk+Xm2yozEeYOWSIwKnOgVo+sZJneusAOGTnlevMYYzf61Rpoti31mDfXTBlfHnoBKqDuxLhk39ZwK5PbKcBHx697lWeQkbC+6mMcoU1PsAQ0xGkzfFEyg5F8F0yRQTlP26oW9gIy8EkufxzF+yh2dWKRv+oyvWyXD32yxTT/DIydMur8EPhnUSX2i9C/N5W/hiXJMRUh8nihXFKzdEGq+sCR4c8Qr5Oo1mncBv3jC+VWRaHeXgQWW9ffq0BnVrqZHSwym0=)
2025-07-05 22:31:31.334134 | orchestrator |
2025-07-05 22:31:31.334151 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-05 22:31:31.334168 | orchestrator | Saturday 05 July 2025 22:31:30 +0000 (0:00:01.080) 0:00:23.723 *********
2025-07-05 22:31:31.334203 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDU/LrfD9A8phg+W+E/uvq8MYE82flhkG2KiftQmZSMTPOYK4OJIeFwgxLgEZix1eOFXToyfZMaJg+/KNJ3qtjEK+SSf+YOBxclKkwUVjkTJz8O4qyFPgz/sKg84bsKEU+IBpgayOBWB7WJaE/aw6s8GtlgunQcj78ff4uQSONnXbP3xJIKYKyKDg9OAhsjMNQ49o4eXYJiyLrIGZ9BLIku/Z3+kkpb6ZBbgNNgr3BPfhIMGfW6jwrT5W5JNubaEugJ3WRCHQ6Vm+X+IA+JT9VNJUyUsrXDN/teHfLoHtnWCHMF4YrB6OHKr6k1ubGw2io9NFaSjrAOW1VKpHZaSnpGVosnqwBSl8OMKUfQKbBDyHxtqWMy8kdRWO40ni9uTXaYD1xixyKu7GLf2vst7Z5oEb55pORBcDIP4Ude0z9rzZwaUz5EaOfYZpcWSOhd2KJyk7Flyz6wrj1ykuuF2BQnAPZlxLPq+iud5fmdKh5ibEUsX8jmOdBF2ocrM0cAht0=)
2025-07-05 22:31:35.607122 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFmkemR1O31jdOTWektsZ0ANNipg8FauX/CrlwODoMwGrcXUoTH7ANmD5UNwyKbKtCwIAc9Q9E8WgWXseX4JcmA=)
2025-07-05 22:31:35.607224 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBmGJKkEi/c7Ult9HSTV5TEj/lMmcUoYSjRpZS3kIB3S)
2025-07-05 22:31:35.607241 | orchestrator |
2025-07-05 22:31:35.607255 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-05 22:31:35.607269 | orchestrator | Saturday 05 July 2025 22:31:31 +0000 (0:00:01.096) 0:00:24.819 *********
2025-07-05 22:31:35.607281 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMUF2IbMhYRFBYIvebat7E8A+SVxhLyG3cA8TFM8nPXO)
2025-07-05 22:31:35.607296 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwSwy6Nr4yF9JLg32Siy5rTkQ2EoxpIoZJUshwCflGHtudm5cz9kV3tqLq9T6CO4NJYbWZ/1pPg2+93NBT1/qIsAA7o7R69IbySix705QPqKm250BMz5ygpcjtXtNDlJrjIBcZ3qVaJxxdGapFl6AwU4EVHcKt0zvcs3FST4CIjUXiARtpZwDek1kObTQjwVRqVIuz0SPyBFpppr4OxDHn4vD7bTFjCHlo8XdyXy7fVlDx8XiUwzeTNeMUU6mtG+hjF/fssnj++akl0QZhGME5cFw+w0gsh3v0WHCDpMNQoE8xbCZ6mp0OfdaPIYnmvnizzrMmXiSon0WpbYZrOEMegIQv0gjX4Ejcc7GPy8fwTwiE1dS1769IzZQ0z+DUMsaVc2eXToZq5xYISJ3HLqP+Gdm/CsdQ6oa+a5TSuEGiybhmwu+nZNpkNGeLnwrCkk0HxyQdl6/0t2jw+w8UowFuorqo1YQFD5bDol31+kPIExW49JIrPTubhFYWgGXgUnM=)
2025-07-05 22:31:35.607310 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCapNJX7oqtn8Pmzcpauj4VFn++rD07l1xdRsHCMSmJ75PyPj5BHKLPyO83DE+ydt3r13F02pcbB3HZJlm7abNw=)
2025-07-05 22:31:35.607322 | orchestrator |
2025-07-05 22:31:35.607380 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-05 22:31:35.607392 | orchestrator | Saturday 05 July 2025 22:31:32 +0000 (0:00:01.097) 0:00:25.917 *********
2025-07-05 22:31:35.607404 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfLePfCqh83rsNwpb0UWZ0x1yH6oPUgRaUi/l5AsWa9f9Jl84qN5in3l6lYNZtViXZd5rm5U5dvg5CoqPK5TBGY9e2uZkZVClYSv+xRWXwtbHyCMTGQHradGwv4pt8lPCkGfbF0dRKmT4DE3IEMwza3qLd1b1s5aUu6XnKw5/014R9R5iaLgKpXPxz/poJ1lYZ1l+kSCnT2IwbVKhfR8IvrreDo8G+VP6ZrJnZjNK4yfZaoaXx4TbnZiOQ6Y4wsU4M+jkYAwOEeh6TujDc42QqaC9hCmc/6MJlRVN6ysARC2YA3Xp+fBcvHrzSUpKXlYr3wwuQyWy5Lk3gzHjGHBfy5bH8X1h7gk9P1oNcHrq8bLQ35trkAwwT5IpTbcYgh5J5RDfI3ldGlrJ38PzPKc5nZCScG1+e2wCQhEtR6K//5YNcwmUA5BcHPJPYSg7E59ejfAA5eTj4DmV3h9vT9BukHt0y2GQStyD9+MR4Pl46ttgHN32ulAxJEeVgyC3a1E8=)
2025-07-05 22:31:35.607416 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJZnDhu1brZnCS3kMtuUD4+RRi1gjnYd2i5JtLIK5o0M2yQ6v4H4n78z4xJz4zOyxxD5HKRF8SE/Af2dbsXMmCY=)
2025-07-05 22:31:35.607450 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINZuhq7uhAhorK5wiQWJdIgzNw7I12MaJNsgrRoFGls/)
2025-07-05 22:31:35.607462 | orchestrator |
2025-07-05 22:31:35.607472 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-05 22:31:35.607483 | orchestrator | Saturday 05 July 2025 22:31:33 +0000 (0:00:01.054) 0:00:26.972 *********
2025-07-05 22:31:35.607494 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA/3oJFoNUbBkAIbs8MqeCDjFE4RArvqvkVZ5Jjy9LZt06spnkndJQkdWkZ5SWbCT6e9+6PjfQYTEL4Wj9BWdn0=)
2025-07-05 22:31:35.607506 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvh9ZHqYiBNHZfMeXt1F2/DEuNkFRrV0eZOMa5DYmRMa4XpBymysaA4Rde33JyobUZG/s09S0Hct7mbbOUyhWdrrU4v6cHlv9K2XO8udJuE3tZokBPGag+Ne0AlqOmwruj0TEFy4BCziwa6+itKQtNhNHiJUjWb2ov1lrYaJunmYPN+Y1I+8iTjH6YAuOew0ksHTcp+RIYpuxUlRNNx6f8Jpe6YDFM6ayOi1g/1SE+epJiB7EyEmfkm6O9Zo4NCD3vMf8J3dOWo8YhZVpSbNFxv0w0kwobw+/TvTxA3kL6A3V2QPNUAcnZrJb7hNXNBndB+CtFdAB0q04Q83GZMqQp10AoYsds4BPUGXMvKS7XvGemiLMDr7qOMeqgsyiMUvbs0NoQlbUjSU34QDqgptd+vFPKoiYKhUOdhUR/dPI9H+OjAbNAUrEDULvz8+OayzHianE9E3ZHCzQPp8YMS9g2BvOa88ZTg6uQvDvi5CMgNXdi1mjDOWgg0uq7VQBcrs0=)
2025-07-05 22:31:35.607518 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGs+Xkqa7FWWvYURFR6ycRbhlmQbpkW9sNZxM8iDoqxS)
2025-07-05 22:31:35.607529 | orchestrator |
2025-07-05 22:31:35.607540 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-07-05 22:31:35.607551 | orchestrator | Saturday 05 July 2025 22:31:34 +0000 (0:00:01.095) 0:00:28.068 *********
2025-07-05 22:31:35.607563 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-07-05 22:31:35.607574 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-07-05 22:31:35.607602 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-07-05 22:31:35.607613 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-07-05 22:31:35.607625 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-07-05 22:31:35.607638 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-07-05 22:31:35.607650 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-07-05 22:31:35.607662 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:31:35.607675 | orchestrator |
2025-07-05 22:31:35.607687 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-07-05 22:31:35.607699 | orchestrator | Saturday 05 July 2025 22:31:34 +0000 (0:00:00.165) 0:00:28.233 *********
2025-07-05 22:31:35.607712 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:31:35.607724 | orchestrator |
2025-07-05 22:31:35.607737 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-07-05 22:31:35.607750 | orchestrator | Saturday 05 July 2025 22:31:34 +0000 (0:00:00.068) 0:00:28.302 *********
2025-07-05 22:31:35.607770 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:31:35.607783 | orchestrator |
2025-07-05 22:31:35.607795 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-07-05 22:31:35.607807 | orchestrator | Saturday 05 July 2025 22:31:34 +0000 (0:00:00.079) 0:00:28.381 *********
2025-07-05 22:31:35.607820 | orchestrator | changed: [testbed-manager]
2025-07-05 22:31:35.607832 | orchestrator |
2025-07-05 22:31:35.607845 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 22:31:35.607858 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-05 22:31:35.607872 | orchestrator |
2025-07-05 22:31:35.607884 | orchestrator |
2025-07-05 22:31:35.607897 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 22:31:35.607909 | orchestrator | Saturday 05 July 2025 22:31:35 +0000 (0:00:00.476) 0:00:28.858 *********
2025-07-05 22:31:35.607921 | orchestrator | ===============================================================================
2025-07-05 22:31:35.607942 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.11s
2025-07-05 22:31:35.607954 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.28s
2025-07-05 22:31:35.607966 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.20s
2025-07-05 22:31:35.607980 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-07-05 22:31:35.607992 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-07-05 22:31:35.608003 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-07-05 22:31:35.608013 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-07-05 22:31:35.608024 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-07-05 22:31:35.608035 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-07-05 22:31:35.608046 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-07-05 22:31:35.608056 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-07-05 22:31:35.608067 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-07-05 22:31:35.608078 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-07-05 22:31:35.608088 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-07-05 22:31:35.608099 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-07-05 22:31:35.608110 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-07-05 22:31:35.608120 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.48s
2025-07-05 22:31:35.608131 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s
2025-07-05 22:31:35.608143 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s
2025-07-05 22:31:35.608154 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s
2025-07-05 22:31:35.891504 | orchestrator | + osism apply squid
2025-07-05 22:31:47.777262 | orchestrator | 2025-07-05 22:31:47 | INFO  | Task 58fc37b0-aaab-47de-8204-7593441250fd (squid) was prepared for execution.
2025-07-05 22:31:47.777443 | orchestrator | 2025-07-05 22:31:47 | INFO  | It takes a moment until task 58fc37b0-aaab-47de-8204-7593441250fd (squid) has been started and output is visible here.
2025-07-05 22:33:41.782463 | orchestrator |
2025-07-05 22:33:41.782606 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-07-05 22:33:41.782634 | orchestrator |
2025-07-05 22:33:41.782655 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-07-05 22:33:41.782674 | orchestrator | Saturday 05 July 2025 22:31:51 +0000 (0:00:00.164) 0:00:00.164 *********
2025-07-05 22:33:41.782740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-07-05 22:33:41.782753 | orchestrator |
2025-07-05 22:33:41.782764 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-07-05 22:33:41.782776 | orchestrator | Saturday 05 July 2025 22:31:51 +0000 (0:00:00.082) 0:00:00.247 *********
2025-07-05 22:33:41.782787 | orchestrator | ok: [testbed-manager]
2025-07-05 22:33:41.782799 | orchestrator |
2025-07-05 22:33:41.782809 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-07-05 22:33:41.782820 | orchestrator | Saturday 05 July 2025 22:31:53 +0000 (0:00:01.403) 0:00:01.650 *********
2025-07-05 22:33:41.782832 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-07-05 22:33:41.782843 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-07-05 22:33:41.782854 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-07-05 22:33:41.782893 | orchestrator |
2025-07-05 22:33:41.782905 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-07-05 22:33:41.782916 | orchestrator | Saturday 05 July 2025 22:31:54 +0000 (0:00:01.167) 0:00:02.818 *********
2025-07-05 22:33:41.782927 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-07-05 22:33:41.782937 | orchestrator |
2025-07-05 22:33:41.782948 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-07-05 22:33:41.782959 | orchestrator | Saturday 05 July 2025 22:31:55 +0000 (0:00:01.068) 0:00:03.887 *********
2025-07-05 22:33:41.782970 | orchestrator | ok: [testbed-manager]
2025-07-05 22:33:41.782980 | orchestrator |
2025-07-05 22:33:41.782993 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-07-05 22:33:41.783006 | orchestrator | Saturday 05 July 2025 22:31:55 +0000 (0:00:00.351) 0:00:04.238 *********
2025-07-05 22:33:41.783026 | orchestrator | changed: [testbed-manager]
2025-07-05 22:33:41.783044 | orchestrator |
2025-07-05 22:33:41.783062 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-07-05 22:33:41.783081 | orchestrator | Saturday 05 July 2025 22:31:56 +0000 (0:00:00.896) 0:00:05.134 *********
2025-07-05 22:33:41.783101 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-07-05 22:33:41.783114 | orchestrator | ok: [testbed-manager]
2025-07-05 22:33:41.783126 | orchestrator |
2025-07-05 22:33:41.783138 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-07-05 22:33:41.783151 | orchestrator | Saturday 05 July 2025 22:32:28 +0000 (0:00:31.531) 0:00:36.666 *********
2025-07-05 22:33:41.783196 | orchestrator | changed: [testbed-manager]
2025-07-05 22:33:41.783209 | orchestrator |
2025-07-05 22:33:41.783221 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-07-05 22:33:41.783234 | orchestrator | Saturday 05 July 2025 22:32:40 +0000 (0:00:12.572) 0:00:49.238 *********
2025-07-05 22:33:41.783246 | orchestrator | Pausing for 60 seconds
2025-07-05 22:33:41.783259 | orchestrator | changed: [testbed-manager]
2025-07-05 22:33:41.783271 | orchestrator |
2025-07-05 22:33:41.783284 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-07-05 22:33:41.783296 | orchestrator | Saturday 05 July 2025 22:33:40 +0000 (0:01:00.078) 0:01:49.317 *********
2025-07-05 22:33:41.783308 | orchestrator | ok: [testbed-manager]
2025-07-05 22:33:41.783320 | orchestrator |
2025-07-05 22:33:41.783334 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-07-05 22:33:41.783347 | orchestrator | Saturday 05 July 2025 22:33:40 +0000 (0:00:00.073) 0:01:49.391 *********
2025-07-05 22:33:41.783358 | orchestrator | changed: [testbed-manager]
2025-07-05 22:33:41.783373 | orchestrator |
2025-07-05 22:33:41.783391 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 22:33:41.783410 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:33:41.783427 | orchestrator |
2025-07-05 22:33:41.783446 | orchestrator |
2025-07-05 22:33:41.783463 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 22:33:41.783481 | orchestrator | Saturday 05 July 2025 22:33:41 +0000 (0:00:00.600) 0:01:49.991 *********
2025-07-05 22:33:41.783499 | orchestrator | ===============================================================================
2025-07-05 22:33:41.783517 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-07-05 22:33:41.783535 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.53s
2025-07-05 22:33:41.783554 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.57s
2025-07-05 22:33:41.783573 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.40s
2025-07-05 22:33:41.783591 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.17s
2025-07-05 22:33:41.783609 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s
2025-07-05 22:33:41.783667 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.90s
2025-07-05 22:33:41.783689 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s
2025-07-05 22:33:41.783709 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s
2025-07-05 22:33:41.783728 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2025-07-05 22:33:41.783745 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2025-07-05 22:33:42.038314 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-05 22:33:42.038881 | orchestrator | ++ semver latest 9.0.0
2025-07-05 22:33:42.082735 | orchestrator | + [[ -1 -lt 0 ]]
2025-07-05 22:33:42.082821 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-05 22:33:42.083309 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-07-05 22:33:54.027575 | orchestrator | 2025-07-05 22:33:54 | INFO  | Task 93775047-7234-4e59-904b-a41d8dd8bac7 (operator) was prepared for execution.
2025-07-05 22:33:54.027697 | orchestrator | 2025-07-05 22:33:54 | INFO  | It takes a moment until task 93775047-7234-4e59-904b-a41d8dd8bac7 (operator) has been started and output is visible here.
2025-07-05 22:34:08.998203 | orchestrator |
2025-07-05 22:34:08.998337 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-07-05 22:34:08.998355 | orchestrator |
2025-07-05 22:34:08.998368 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-05 22:34:08.998379 | orchestrator | Saturday 05 July 2025 22:33:57 +0000 (0:00:00.147) 0:00:00.147 *********
2025-07-05 22:34:08.998391 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:34:08.998403 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:34:08.998414 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:34:08.998441 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:34:08.998453 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:34:08.998464 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:34:08.998475 | orchestrator |
2025-07-05 22:34:08.998486 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-07-05 22:34:08.998497 | orchestrator | Saturday 05 July 2025 22:34:00 +0000 (0:00:02.981) 0:00:03.128 *********
2025-07-05 22:34:08.998508 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:34:08.998519 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:34:08.998529 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:34:08.998540 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:34:08.998551 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:34:08.998561 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:34:08.998572 | orchestrator |
2025-07-05 22:34:08.998583 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-07-05 22:34:08.998594 | orchestrator |
2025-07-05 22:34:08.998606 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-07-05 22:34:08.998617 | orchestrator | Saturday 05 July 2025 22:34:01 +0000 (0:00:00.669) 0:00:03.797 *********
2025-07-05 22:34:08.998628 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:34:08.998641 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:34:08.998654 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:34:08.998666 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:34:08.998679 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:34:08.998692 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:34:08.998705 | orchestrator |
2025-07-05 22:34:08.998718 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-07-05 22:34:08.998731 | orchestrator | Saturday 05 July 2025 22:34:01 +0000 (0:00:00.155) 0:00:03.953 *********
2025-07-05 22:34:08.998744 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:34:08.998756 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:34:08.998768 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:34:08.998781 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:34:08.998794 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:34:08.998807 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:34:08.998820 | orchestrator |
2025-07-05 22:34:08.998833 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-07-05 22:34:08.998866 | orchestrator | Saturday 05 July 2025 22:34:01 +0000 (0:00:00.152) 0:00:04.106 *********
2025-07-05 22:34:08.998879 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:34:08.998892 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:34:08.998905 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:34:08.998916 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:34:08.998929 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:34:08.998941 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:34:08.998954 | orchestrator |
2025-07-05 22:34:08.998967 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-07-05 22:34:08.998980 | orchestrator | Saturday 05 July 2025 22:34:02 +0000 (0:00:00.596) 0:00:04.702 *********
2025-07-05 22:34:08.998991 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:34:08.999002 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:34:08.999013 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:34:08.999024 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:34:08.999035 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:34:08.999046 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:34:08.999056 | orchestrator |
2025-07-05 22:34:08.999067 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-07-05 22:34:08.999078 | orchestrator | Saturday 05 July 2025 22:34:03 +0000 (0:00:00.800) 0:00:05.502 *********
2025-07-05 22:34:08.999089 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-07-05 22:34:08.999100 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-07-05 22:34:08.999111 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-07-05 22:34:08.999122 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-07-05 22:34:08.999186 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-07-05 22:34:08.999208 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-07-05 22:34:08.999227 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-07-05 22:34:08.999243 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-07-05 22:34:08.999254 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-07-05 22:34:08.999265 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-07-05 22:34:08.999275 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-07-05 22:34:08.999286 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-07-05 22:34:08.999296 | orchestrator |
2025-07-05 22:34:08.999307 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-07-05 22:34:08.999318 | orchestrator | Saturday 05 July 2025 22:34:04 +0000 (0:00:01.218) 0:00:06.720 *********
2025-07-05 22:34:08.999328 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:34:08.999339 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:34:08.999349 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:34:08.999360 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:34:08.999370 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:34:08.999381 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:34:08.999391 | orchestrator |
2025-07-05 22:34:08.999402 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-07-05 22:34:08.999413 | orchestrator | Saturday 05 July 2025 22:34:05 +0000 (0:00:01.283) 0:00:07.930 *********
2025-07-05 22:34:08.999424 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-07-05 22:34:08.999435 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-07-05 22:34:08.999445 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-07-05 22:34:08.999456 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-07-05 22:34:08.999486 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-07-05 22:34:08.999498 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-07-05 22:34:08.999509 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-07-05 22:34:08.999519 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-07-05 22:34:08.999539 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-07-05 22:34:08.999550 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-07-05 22:34:08.999561 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-07-05 22:34:08.999572 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-07-05 22:34:08.999583 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-07-05 22:34:08.999593 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-07-05 22:34:08.999604 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-07-05 22:34:08.999614 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-07-05 22:34:08.999625 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-07-05 22:34:08.999636 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-07-05 22:34:08.999647 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-07-05 22:34:08.999657 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-07-05 22:34:08.999668 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-07-05 22:34:08.999678 | orchestrator |
2025-07-05 22:34:08.999689 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-07-05 22:34:08.999701 | orchestrator | Saturday 05 July 2025 22:34:06 +0000 (0:00:01.283) 0:00:09.213 *********
2025-07-05 22:34:08.999712 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:34:08.999723 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:34:08.999733 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:34:08.999744 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:34:08.999754 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:34:08.999765 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:34:08.999775 | orchestrator |
2025-07-05 22:34:08.999786 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-07-05 22:34:08.999797 | orchestrator | Saturday 05 July 2025 22:34:07 +0000 (0:00:00.143) 0:00:09.357 *********
2025-07-05 22:34:08.999808 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:34:08.999818 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:34:08.999829 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:34:08.999839 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:34:08.999850 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:34:08.999861 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:34:08.999871 | orchestrator |
2025-07-05 22:34:08.999882 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-07-05 22:34:08.999893 | orchestrator | Saturday 05 July 2025 22:34:07 +0000 (0:00:00.573) 0:00:09.930 *********
2025-07-05 22:34:08.999903 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:34:08.999914 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:34:08.999924 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:34:08.999935 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:34:08.999954 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:34:08.999965 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:34:08.999976 | orchestrator |
2025-07-05 22:34:08.999987 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-07-05 22:34:08.999997 | orchestrator | Saturday 05 July 2025 22:34:07 +0000 (0:00:00.173) 0:00:10.104 *********
2025-07-05 22:34:09.000008 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-05 22:34:09.000019 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:34:09.000030 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-05 22:34:09.000055 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:34:09.000067 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-05 22:34:09.000087 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:34:09.000098 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-07-05 22:34:09.000109 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:34:09.000153 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-07-05 22:34:09.000167 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:34:09.000178 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-05 22:34:09.000188 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:34:09.000199 | orchestrator |
2025-07-05 22:34:09.000210 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-07-05 22:34:09.000221 | orchestrator | Saturday 05 July 2025 22:34:08 +0000 (0:00:00.720) 0:00:10.824 *********
2025-07-05 22:34:09.000231 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:34:09.000242 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:34:09.000252 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:34:09.000263 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:34:09.000274 | orchestrator | skipping: [testbed-node-4]
2025-07-05
22:34:09.000284 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:34:09.000295 | orchestrator | 2025-07-05 22:34:09.000306 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-07-05 22:34:09.000316 | orchestrator | Saturday 05 July 2025 22:34:08 +0000 (0:00:00.159) 0:00:10.984 ********* 2025-07-05 22:34:09.000327 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:34:09.000338 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:34:09.000349 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:34:09.000359 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:34:09.000370 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:34:09.000381 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:34:09.000391 | orchestrator | 2025-07-05 22:34:09.000402 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-07-05 22:34:09.000413 | orchestrator | Saturday 05 July 2025 22:34:08 +0000 (0:00:00.154) 0:00:11.138 ********* 2025-07-05 22:34:09.000424 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:34:09.000434 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:34:09.000445 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:34:09.000456 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:34:09.000474 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:34:10.072498 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:34:10.072608 | orchestrator | 2025-07-05 22:34:10.072624 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-07-05 22:34:10.072637 | orchestrator | Saturday 05 July 2025 22:34:08 +0000 (0:00:00.149) 0:00:11.288 ********* 2025-07-05 22:34:10.072648 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:34:10.072659 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:34:10.072670 | orchestrator | changed: [testbed-node-4] 2025-07-05 
22:34:10.072700 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:34:10.072711 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:34:10.072722 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:34:10.072732 | orchestrator | 2025-07-05 22:34:10.072743 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-07-05 22:34:10.072754 | orchestrator | Saturday 05 July 2025 22:34:09 +0000 (0:00:00.638) 0:00:11.926 ********* 2025-07-05 22:34:10.072765 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:34:10.072775 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:34:10.072786 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:34:10.072796 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:34:10.072808 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:34:10.072819 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:34:10.072829 | orchestrator | 2025-07-05 22:34:10.072840 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:34:10.072852 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-05 22:34:10.072864 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-05 22:34:10.072898 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-05 22:34:10.072910 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-05 22:34:10.072921 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-05 22:34:10.072931 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-05 22:34:10.072942 | orchestrator | 2025-07-05 22:34:10.072952 | orchestrator | 2025-07-05 22:34:10.072963 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:34:10.072974 | orchestrator | Saturday 05 July 2025 22:34:09 +0000 (0:00:00.220) 0:00:12.147 ********* 2025-07-05 22:34:10.072984 | orchestrator | =============================================================================== 2025-07-05 22:34:10.072995 | orchestrator | Gathering Facts --------------------------------------------------------- 2.98s 2025-07-05 22:34:10.073005 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s 2025-07-05 22:34:10.073016 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.22s 2025-07-05 22:34:10.073027 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.21s 2025-07-05 22:34:10.073037 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2025-07-05 22:34:10.073048 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s 2025-07-05 22:34:10.073058 | orchestrator | Do not require tty for all users ---------------------------------------- 0.67s 2025-07-05 22:34:10.073069 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s 2025-07-05 22:34:10.073080 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s 2025-07-05 22:34:10.073090 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2025-07-05 22:34:10.073101 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2025-07-05 22:34:10.073111 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s 2025-07-05 22:34:10.073122 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2025-07-05 22:34:10.073157 | orchestrator 
| osism.commons.operator : Gather variables for each operating system ----- 0.16s 2025-07-05 22:34:10.073169 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s 2025-07-05 22:34:10.073179 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2025-07-05 22:34:10.073190 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2025-07-05 22:34:10.073202 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s 2025-07-05 22:34:10.334905 | orchestrator | + osism apply --environment custom facts 2025-07-05 22:34:12.136111 | orchestrator | 2025-07-05 22:34:12 | INFO  | Trying to run play facts in environment custom 2025-07-05 22:34:22.349400 | orchestrator | 2025-07-05 22:34:22 | INFO  | Task 09cb608e-4b94-4700-aff9-e68028037ec6 (facts) was prepared for execution. 2025-07-05 22:34:22.349520 | orchestrator | 2025-07-05 22:34:22 | INFO  | It takes a moment until task 09cb608e-4b94-4700-aff9-e68028037ec6 (facts) has been started and output is visible here. 
2025-07-05 22:35:03.753815 | orchestrator | 2025-07-05 22:35:03.753937 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-07-05 22:35:03.753953 | orchestrator | 2025-07-05 22:35:03.753966 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-07-05 22:35:03.753978 | orchestrator | Saturday 05 July 2025 22:34:26 +0000 (0:00:00.084) 0:00:00.085 ********* 2025-07-05 22:35:03.754012 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:03.754148 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:35:03.754161 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:35:03.754172 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:35:03.754182 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:35:03.754202 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:35:03.754213 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:35:03.754224 | orchestrator | 2025-07-05 22:35:03.754235 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-07-05 22:35:03.754245 | orchestrator | Saturday 05 July 2025 22:34:27 +0000 (0:00:01.423) 0:00:01.508 ********* 2025-07-05 22:35:03.754256 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:03.754267 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:35:03.754277 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:35:03.754288 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:35:03.754298 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:35:03.754309 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:35:03.754320 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:35:03.754330 | orchestrator | 2025-07-05 22:35:03.754341 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-07-05 22:35:03.754354 | orchestrator | 2025-07-05 22:35:03.754366 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2025-07-05 22:35:03.754379 | orchestrator | Saturday 05 July 2025 22:34:28 +0000 (0:00:01.288) 0:00:02.796 ********* 2025-07-05 22:35:03.754392 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:03.754404 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:03.754416 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:03.754428 | orchestrator | 2025-07-05 22:35:03.754441 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-05 22:35:03.754454 | orchestrator | Saturday 05 July 2025 22:34:28 +0000 (0:00:00.110) 0:00:02.907 ********* 2025-07-05 22:35:03.754467 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:03.754479 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:03.754491 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:03.754503 | orchestrator | 2025-07-05 22:35:03.754515 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-05 22:35:03.754528 | orchestrator | Saturday 05 July 2025 22:34:29 +0000 (0:00:00.206) 0:00:03.113 ********* 2025-07-05 22:35:03.754541 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:03.754553 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:03.754565 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:03.754577 | orchestrator | 2025-07-05 22:35:03.754590 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-05 22:35:03.754603 | orchestrator | Saturday 05 July 2025 22:34:29 +0000 (0:00:00.183) 0:00:03.297 ********* 2025-07-05 22:35:03.754635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:35:03.754650 | orchestrator | 2025-07-05 22:35:03.754664 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2025-07-05 22:35:03.754676 | orchestrator | Saturday 05 July 2025 22:34:29 +0000 (0:00:00.134) 0:00:03.432 ********* 2025-07-05 22:35:03.754688 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:03.754701 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:03.754712 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:03.754723 | orchestrator | 2025-07-05 22:35:03.754734 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-05 22:35:03.754744 | orchestrator | Saturday 05 July 2025 22:34:29 +0000 (0:00:00.448) 0:00:03.880 ********* 2025-07-05 22:35:03.754755 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:35:03.754766 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:35:03.754777 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:35:03.754787 | orchestrator | 2025-07-05 22:35:03.754798 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-05 22:35:03.754819 | orchestrator | Saturday 05 July 2025 22:34:30 +0000 (0:00:00.122) 0:00:04.003 ********* 2025-07-05 22:35:03.754830 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:35:03.754841 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:35:03.754851 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:35:03.754862 | orchestrator | 2025-07-05 22:35:03.754873 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-05 22:35:03.754883 | orchestrator | Saturday 05 July 2025 22:34:31 +0000 (0:00:01.034) 0:00:05.037 ********* 2025-07-05 22:35:03.754894 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:03.754904 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:03.754915 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:03.754926 | orchestrator | 2025-07-05 22:35:03.754936 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-05 
22:35:03.754947 | orchestrator | Saturday 05 July 2025 22:34:31 +0000 (0:00:00.467) 0:00:05.505 ********* 2025-07-05 22:35:03.754958 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:35:03.754968 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:35:03.754979 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:35:03.754990 | orchestrator | 2025-07-05 22:35:03.755000 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-05 22:35:03.755011 | orchestrator | Saturday 05 July 2025 22:34:32 +0000 (0:00:01.081) 0:00:06.587 ********* 2025-07-05 22:35:03.755023 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:35:03.755033 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:35:03.755044 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:35:03.755054 | orchestrator | 2025-07-05 22:35:03.755065 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-07-05 22:35:03.755099 | orchestrator | Saturday 05 July 2025 22:34:46 +0000 (0:00:13.733) 0:00:20.320 ********* 2025-07-05 22:35:03.755111 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:35:03.755122 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:35:03.755133 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:35:03.755158 | orchestrator | 2025-07-05 22:35:03.755180 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-07-05 22:35:03.755260 | orchestrator | Saturday 05 July 2025 22:34:46 +0000 (0:00:00.109) 0:00:20.430 ********* 2025-07-05 22:35:03.755275 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:35:03.755286 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:35:03.755297 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:35:03.755308 | orchestrator | 2025-07-05 22:35:03.755318 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-07-05 
22:35:03.755329 | orchestrator | Saturday 05 July 2025 22:34:54 +0000 (0:00:07.700) 0:00:28.130 ********* 2025-07-05 22:35:03.755340 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:03.755357 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:03.755368 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:03.755379 | orchestrator | 2025-07-05 22:35:03.755390 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-07-05 22:35:03.755401 | orchestrator | Saturday 05 July 2025 22:34:54 +0000 (0:00:00.488) 0:00:28.619 ********* 2025-07-05 22:35:03.755411 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-07-05 22:35:03.755422 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-07-05 22:35:03.755448 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-07-05 22:35:03.755470 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-07-05 22:35:03.755481 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-07-05 22:35:03.755491 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-07-05 22:35:03.755502 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-07-05 22:35:03.755513 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-07-05 22:35:03.755524 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-07-05 22:35:03.755543 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-07-05 22:35:03.755554 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-07-05 22:35:03.755564 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-07-05 22:35:03.755575 | orchestrator | 2025-07-05 22:35:03.755586 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2025-07-05 22:35:03.755597 | orchestrator | Saturday 05 July 2025 22:34:58 +0000 (0:00:03.595) 0:00:32.214 ********* 2025-07-05 22:35:03.755607 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:03.755618 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:03.755629 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:03.755640 | orchestrator | 2025-07-05 22:35:03.755651 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-05 22:35:03.755661 | orchestrator | 2025-07-05 22:35:03.755672 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-05 22:35:03.755683 | orchestrator | Saturday 05 July 2025 22:34:59 +0000 (0:00:01.298) 0:00:33.513 ********* 2025-07-05 22:35:03.755694 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:35:03.755705 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:35:03.755715 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:35:03.755726 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:03.755737 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:03.755747 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:03.755758 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:03.755769 | orchestrator | 2025-07-05 22:35:03.755780 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:35:03.755792 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:35:03.755808 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:35:03.755828 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:35:03.755853 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:35:03.755878 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:35:03.755894 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:35:03.755911 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:35:03.755929 | orchestrator | 2025-07-05 22:35:03.755946 | orchestrator | 2025-07-05 22:35:03.755963 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:35:03.755981 | orchestrator | Saturday 05 July 2025 22:35:03 +0000 (0:00:04.154) 0:00:37.668 ********* 2025-07-05 22:35:03.755999 | orchestrator | =============================================================================== 2025-07-05 22:35:03.756017 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.73s 2025-07-05 22:35:03.756036 | orchestrator | Install required packages (Debian) -------------------------------------- 7.70s 2025-07-05 22:35:03.756055 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.15s 2025-07-05 22:35:03.756073 | orchestrator | Copy fact files --------------------------------------------------------- 3.60s 2025-07-05 22:35:03.756115 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s 2025-07-05 22:35:03.756126 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.30s 2025-07-05 22:35:03.756147 | orchestrator | Copy fact file ---------------------------------------------------------- 1.29s 2025-07-05 22:35:04.008158 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s 2025-07-05 22:35:04.008304 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s 2025-07-05 22:35:04.008319 | orchestrator | Create custom facts directory 
------------------------------------------- 0.49s 2025-07-05 22:35:04.008331 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s 2025-07-05 22:35:04.008341 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2025-07-05 22:35:04.008353 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2025-07-05 22:35:04.008364 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s 2025-07-05 22:35:04.008375 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s 2025-07-05 22:35:04.008386 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-07-05 22:35:04.008398 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-07-05 22:35:04.008409 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-07-05 22:35:04.301719 | orchestrator | + osism apply bootstrap 2025-07-05 22:35:16.200282 | orchestrator | 2025-07-05 22:35:16 | INFO  | Task f0d06d3f-304d-4290-8b86-512510482ac1 (bootstrap) was prepared for execution. 2025-07-05 22:35:16.200402 | orchestrator | 2025-07-05 22:35:16 | INFO  | It takes a moment until task f0d06d3f-304d-4290-8b86-512510482ac1 (bootstrap) has been started and output is visible here. 
2025-07-05 22:35:31.867638 | orchestrator | 2025-07-05 22:35:31.867768 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-07-05 22:35:31.867785 | orchestrator | 2025-07-05 22:35:31.867798 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-07-05 22:35:31.867810 | orchestrator | Saturday 05 July 2025 22:35:20 +0000 (0:00:00.161) 0:00:00.161 ********* 2025-07-05 22:35:31.867821 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:31.867833 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:35:31.867844 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:35:31.867859 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:35:31.867877 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:31.867904 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:31.867923 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:31.867941 | orchestrator | 2025-07-05 22:35:31.867958 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-05 22:35:31.867975 | orchestrator | 2025-07-05 22:35:31.867992 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-05 22:35:31.868010 | orchestrator | Saturday 05 July 2025 22:35:20 +0000 (0:00:00.220) 0:00:00.382 ********* 2025-07-05 22:35:31.868027 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:35:31.868115 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:35:31.868139 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:35:31.868156 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:31.868169 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:31.868182 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:31.868195 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:31.868207 | orchestrator | 2025-07-05 22:35:31.868220 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-07-05 22:35:31.868232 | orchestrator | 2025-07-05 22:35:31.868245 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-05 22:35:31.868258 | orchestrator | Saturday 05 July 2025 22:35:23 +0000 (0:00:03.519) 0:00:03.901 ********* 2025-07-05 22:35:31.868271 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-07-05 22:35:31.868284 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-05 22:35:31.868297 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-07-05 22:35:31.868309 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-07-05 22:35:31.868347 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-05 22:35:31.868360 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-07-05 22:35:31.868373 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-05 22:35:31.868402 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-07-05 22:35:31.868415 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-05 22:35:31.868427 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-07-05 22:35:31.868439 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-07-05 22:35:31.868452 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-07-05 22:35:31.868465 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-07-05 22:35:31.868477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-07-05 22:35:31.868489 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-07-05 22:35:31.868500 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:35:31.868519 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-07-05 22:35:31.868546 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2025-07-05 22:35:31.868566 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-07-05 22:35:31.868583 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-07-05 22:35:31.868601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-05 22:35:31.868616 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-07-05 22:35:31.868634 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-07-05 22:35:31.868652 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-07-05 22:35:31.868670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-05 22:35:31.868690 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-07-05 22:35:31.868708 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-07-05 22:35:31.868724 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:35:31.868736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-05 22:35:31.868746 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-07-05 22:35:31.868757 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-07-05 22:35:31.868778 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-07-05 22:35:31.868803 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-07-05 22:35:31.868825 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-05 22:35:31.868842 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-07-05 22:35:31.868859 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-05 22:35:31.868876 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:35:31.868894 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-07-05 22:35:31.868911 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-3)  2025-07-05 22:35:31.868930 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-07-05 22:35:31.868945 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-07-05 22:35:31.868956 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-07-05 22:35:31.868966 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:35:31.868977 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-07-05 22:35:31.868987 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-07-05 22:35:31.868998 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-07-05 22:35:31.869030 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-07-05 22:35:31.869042 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:35:31.869086 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-05 22:35:31.869110 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-07-05 22:35:31.869122 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-07-05 22:35:31.869132 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-07-05 22:35:31.869143 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:35:31.869153 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-07-05 22:35:31.869164 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-07-05 22:35:31.869175 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:35:31.869185 | orchestrator | 2025-07-05 22:35:31.869196 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-07-05 22:35:31.869207 | orchestrator | 2025-07-05 22:35:31.869217 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-07-05 22:35:31.869228 | orchestrator | Saturday 05 July 2025 22:35:24 +0000 
(0:00:00.461) 0:00:04.362 ********* 2025-07-05 22:35:31.869239 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:31.869249 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:31.869260 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:31.869271 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:35:31.869281 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:35:31.869292 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:31.869302 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:35:31.869312 | orchestrator | 2025-07-05 22:35:31.869323 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-07-05 22:35:31.869334 | orchestrator | Saturday 05 July 2025 22:35:25 +0000 (0:00:01.435) 0:00:05.798 ********* 2025-07-05 22:35:31.869345 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:31.869355 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:31.869366 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:35:31.869376 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:31.869387 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:31.869397 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:35:31.869408 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:35:31.869418 | orchestrator | 2025-07-05 22:35:31.869429 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-07-05 22:35:31.869440 | orchestrator | Saturday 05 July 2025 22:35:27 +0000 (0:00:01.328) 0:00:07.126 ********* 2025-07-05 22:35:31.869452 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:35:31.869465 | orchestrator | 2025-07-05 22:35:31.869476 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-07-05 22:35:31.869487 | 
orchestrator | Saturday 05 July 2025 22:35:27 +0000 (0:00:00.262) 0:00:07.388 ********* 2025-07-05 22:35:31.869498 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:35:31.869508 | orchestrator | changed: [testbed-manager] 2025-07-05 22:35:31.869519 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:35:31.869530 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:35:31.869540 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:35:31.869551 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:35:31.869562 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:35:31.869572 | orchestrator | 2025-07-05 22:35:31.869583 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-07-05 22:35:31.869593 | orchestrator | Saturday 05 July 2025 22:35:29 +0000 (0:00:01.905) 0:00:09.294 ********* 2025-07-05 22:35:31.869604 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:35:31.869616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:35:31.869629 | orchestrator | 2025-07-05 22:35:31.869640 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-07-05 22:35:31.869650 | orchestrator | Saturday 05 July 2025 22:35:29 +0000 (0:00:00.293) 0:00:09.588 ********* 2025-07-05 22:35:31.869668 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:35:31.869684 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:35:31.869709 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:35:31.869732 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:35:31.869750 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:35:31.869768 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:35:31.869785 | orchestrator | 2025-07-05 22:35:31.869803 | orchestrator | TASK 
[osism.commons.proxy : Set system wide settings in environment file] ****** 2025-07-05 22:35:31.869821 | orchestrator | Saturday 05 July 2025 22:35:30 +0000 (0:00:01.052) 0:00:10.640 ********* 2025-07-05 22:35:31.869848 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:35:31.869866 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:35:31.869884 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:35:31.869903 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:35:31.869920 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:35:31.869938 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:35:31.869953 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:35:31.869964 | orchestrator | 2025-07-05 22:35:31.869975 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-07-05 22:35:31.869985 | orchestrator | Saturday 05 July 2025 22:35:31 +0000 (0:00:00.595) 0:00:11.236 ********* 2025-07-05 22:35:31.869996 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:35:31.870006 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:35:31.870096 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:35:31.870110 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:35:31.870121 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:35:31.870138 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:35:31.870156 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:31.870183 | orchestrator | 2025-07-05 22:35:31.870204 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-07-05 22:35:31.870222 | orchestrator | Saturday 05 July 2025 22:35:31 +0000 (0:00:00.430) 0:00:11.666 ********* 2025-07-05 22:35:31.870241 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:35:31.870260 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:35:31.870294 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:35:44.089924 | 
orchestrator | skipping: [testbed-node-2] 2025-07-05 22:35:44.090136 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:35:44.090158 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:35:44.090170 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:35:44.090181 | orchestrator | 2025-07-05 22:35:44.090193 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-07-05 22:35:44.090207 | orchestrator | Saturday 05 July 2025 22:35:31 +0000 (0:00:00.207) 0:00:11.874 ********* 2025-07-05 22:35:44.090220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:35:44.090248 | orchestrator | 2025-07-05 22:35:44.090260 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-07-05 22:35:44.090272 | orchestrator | Saturday 05 July 2025 22:35:32 +0000 (0:00:00.299) 0:00:12.174 ********* 2025-07-05 22:35:44.090283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:35:44.090294 | orchestrator | 2025-07-05 22:35:44.090305 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-07-05 22:35:44.090316 | orchestrator | Saturday 05 July 2025 22:35:32 +0000 (0:00:00.298) 0:00:12.473 ********* 2025-07-05 22:35:44.090327 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:44.090339 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:35:44.090349 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:35:44.090383 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:44.090394 | orchestrator | ok: 
[testbed-node-4] 2025-07-05 22:35:44.090405 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:44.090416 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:35:44.090426 | orchestrator | 2025-07-05 22:35:44.090437 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-07-05 22:35:44.090448 | orchestrator | Saturday 05 July 2025 22:35:34 +0000 (0:00:01.483) 0:00:13.956 ********* 2025-07-05 22:35:44.090461 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:35:44.090473 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:35:44.090486 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:35:44.090498 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:35:44.090511 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:35:44.090523 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:35:44.090536 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:35:44.090548 | orchestrator | 2025-07-05 22:35:44.090561 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-07-05 22:35:44.090573 | orchestrator | Saturday 05 July 2025 22:35:34 +0000 (0:00:00.204) 0:00:14.161 ********* 2025-07-05 22:35:44.090585 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:44.090598 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:35:44.090610 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:35:44.090623 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:35:44.090636 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:44.090648 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:44.090661 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:44.090673 | orchestrator | 2025-07-05 22:35:44.090686 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-07-05 22:35:44.090699 | orchestrator | Saturday 05 July 2025 22:35:34 +0000 (0:00:00.526) 0:00:14.688 ********* 2025-07-05 22:35:44.090712 | 
orchestrator | skipping: [testbed-manager] 2025-07-05 22:35:44.090724 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:35:44.090737 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:35:44.090750 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:35:44.090762 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:35:44.090775 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:35:44.090787 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:35:44.090799 | orchestrator | 2025-07-05 22:35:44.090812 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-07-05 22:35:44.090827 | orchestrator | Saturday 05 July 2025 22:35:34 +0000 (0:00:00.226) 0:00:14.914 ********* 2025-07-05 22:35:44.090839 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:44.090850 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:35:44.090860 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:35:44.090871 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:35:44.090881 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:35:44.090892 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:35:44.090903 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:35:44.090913 | orchestrator | 2025-07-05 22:35:44.090924 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-07-05 22:35:44.090935 | orchestrator | Saturday 05 July 2025 22:35:35 +0000 (0:00:00.533) 0:00:15.448 ********* 2025-07-05 22:35:44.090946 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:44.090957 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:35:44.090968 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:35:44.090978 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:35:44.090989 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:35:44.091000 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:35:44.091010 | 
orchestrator | changed: [testbed-node-1] 2025-07-05 22:35:44.091021 | orchestrator | 2025-07-05 22:35:44.091032 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-07-05 22:35:44.091064 | orchestrator | Saturday 05 July 2025 22:35:36 +0000 (0:00:01.148) 0:00:16.597 ********* 2025-07-05 22:35:44.091127 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:44.091159 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:44.091177 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:35:44.091196 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:35:44.091214 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:44.091231 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:35:44.091242 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:44.091253 | orchestrator | 2025-07-05 22:35:44.091263 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-07-05 22:35:44.091274 | orchestrator | Saturday 05 July 2025 22:35:37 +0000 (0:00:01.163) 0:00:17.761 ********* 2025-07-05 22:35:44.091308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:35:44.091328 | orchestrator | 2025-07-05 22:35:44.091345 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-07-05 22:35:44.091363 | orchestrator | Saturday 05 July 2025 22:35:38 +0000 (0:00:00.401) 0:00:18.163 ********* 2025-07-05 22:35:44.091380 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:35:44.091398 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:35:44.091413 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:35:44.091428 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:35:44.091445 | orchestrator | changed: [testbed-node-4] 
2025-07-05 22:35:44.091460 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:35:44.091477 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:35:44.091494 | orchestrator | 2025-07-05 22:35:44.091512 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-05 22:35:44.091530 | orchestrator | Saturday 05 July 2025 22:35:39 +0000 (0:00:01.368) 0:00:19.531 ********* 2025-07-05 22:35:44.091548 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:44.091566 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:35:44.091583 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:35:44.091601 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:35:44.091619 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:44.091634 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:44.091645 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:44.091656 | orchestrator | 2025-07-05 22:35:44.091667 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-05 22:35:44.091678 | orchestrator | Saturday 05 July 2025 22:35:39 +0000 (0:00:00.237) 0:00:19.769 ********* 2025-07-05 22:35:44.091688 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:44.091699 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:35:44.091710 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:35:44.091720 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:35:44.091731 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:44.091741 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:44.091752 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:44.091763 | orchestrator | 2025-07-05 22:35:44.091773 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-05 22:35:44.091784 | orchestrator | Saturday 05 July 2025 22:35:40 +0000 (0:00:00.206) 0:00:19.975 ********* 2025-07-05 22:35:44.091795 | orchestrator | ok: [testbed-manager] 2025-07-05 
22:35:44.091805 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:35:44.091816 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:35:44.091826 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:35:44.091837 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:44.091848 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:44.091858 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:44.091869 | orchestrator | 2025-07-05 22:35:44.091880 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-05 22:35:44.091891 | orchestrator | Saturday 05 July 2025 22:35:40 +0000 (0:00:00.233) 0:00:20.209 ********* 2025-07-05 22:35:44.091902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:35:44.091926 | orchestrator | 2025-07-05 22:35:44.091937 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-05 22:35:44.091947 | orchestrator | Saturday 05 July 2025 22:35:40 +0000 (0:00:00.278) 0:00:20.487 ********* 2025-07-05 22:35:44.091958 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:44.091969 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:35:44.091979 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:44.091990 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:44.092001 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:35:44.092011 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:44.092022 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:35:44.092054 | orchestrator | 2025-07-05 22:35:44.092066 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-05 22:35:44.092077 | orchestrator | Saturday 05 July 2025 22:35:41 +0000 (0:00:00.532) 0:00:21.019 ********* 2025-07-05 22:35:44.092088 | 
orchestrator | skipping: [testbed-manager] 2025-07-05 22:35:44.092099 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:35:44.092110 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:35:44.092120 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:35:44.092131 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:35:44.092141 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:35:44.092152 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:35:44.092162 | orchestrator | 2025-07-05 22:35:44.092173 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-05 22:35:44.092191 | orchestrator | Saturday 05 July 2025 22:35:41 +0000 (0:00:00.228) 0:00:21.247 ********* 2025-07-05 22:35:44.092202 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:44.092213 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:35:44.092223 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:44.092234 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:35:44.092244 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:35:44.092255 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:44.092265 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:44.092276 | orchestrator | 2025-07-05 22:35:44.092286 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-05 22:35:44.092297 | orchestrator | Saturday 05 July 2025 22:35:42 +0000 (0:00:01.059) 0:00:22.307 ********* 2025-07-05 22:35:44.092308 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:44.092318 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:35:44.092329 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:35:44.092339 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:35:44.092350 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:44.092360 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:44.092371 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:35:44.092381 | 
orchestrator | 2025-07-05 22:35:44.092392 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-05 22:35:44.092403 | orchestrator | Saturday 05 July 2025 22:35:42 +0000 (0:00:00.576) 0:00:22.884 ********* 2025-07-05 22:35:44.092414 | orchestrator | ok: [testbed-manager] 2025-07-05 22:35:44.092424 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:35:44.092435 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:35:44.092446 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:35:44.092466 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:36:22.137536 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:36:22.137651 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:36:22.137666 | orchestrator | 2025-07-05 22:36:22.137679 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-05 22:36:22.137692 | orchestrator | Saturday 05 July 2025 22:35:44 +0000 (0:00:01.123) 0:00:24.007 ********* 2025-07-05 22:36:22.137702 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:36:22.137713 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:36:22.137724 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:36:22.137734 | orchestrator | changed: [testbed-manager] 2025-07-05 22:36:22.137745 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:36:22.137779 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:36:22.137790 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:36:22.137801 | orchestrator | 2025-07-05 22:36:22.137812 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-07-05 22:36:22.137823 | orchestrator | Saturday 05 July 2025 22:35:58 +0000 (0:00:14.717) 0:00:38.724 ********* 2025-07-05 22:36:22.137833 | orchestrator | ok: [testbed-manager] 2025-07-05 22:36:22.137844 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:36:22.137854 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:36:22.137865 
| orchestrator | ok: [testbed-node-2] 2025-07-05 22:36:22.137875 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:36:22.137886 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:36:22.137896 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:36:22.137906 | orchestrator | 2025-07-05 22:36:22.137917 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-07-05 22:36:22.137927 | orchestrator | Saturday 05 July 2025 22:35:59 +0000 (0:00:00.214) 0:00:38.939 ********* 2025-07-05 22:36:22.137938 | orchestrator | ok: [testbed-manager] 2025-07-05 22:36:22.137948 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:36:22.137959 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:36:22.137969 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:36:22.137979 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:36:22.137990 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:36:22.138078 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:36:22.138092 | orchestrator | 2025-07-05 22:36:22.138105 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-07-05 22:36:22.138118 | orchestrator | Saturday 05 July 2025 22:35:59 +0000 (0:00:00.209) 0:00:39.149 ********* 2025-07-05 22:36:22.138130 | orchestrator | ok: [testbed-manager] 2025-07-05 22:36:22.138140 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:36:22.138151 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:36:22.138161 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:36:22.138172 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:36:22.138182 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:36:22.138193 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:36:22.138204 | orchestrator | 2025-07-05 22:36:22.138214 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-07-05 22:36:22.138226 | orchestrator | Saturday 05 July 2025 22:35:59 +0000 (0:00:00.211) 0:00:39.361 
********* 2025-07-05 22:36:22.138240 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:36:22.138253 | orchestrator | 2025-07-05 22:36:22.138264 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-07-05 22:36:22.138275 | orchestrator | Saturday 05 July 2025 22:35:59 +0000 (0:00:00.270) 0:00:39.631 ********* 2025-07-05 22:36:22.138286 | orchestrator | ok: [testbed-manager] 2025-07-05 22:36:22.138297 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:36:22.138307 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:36:22.138318 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:36:22.138328 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:36:22.138339 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:36:22.138349 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:36:22.138360 | orchestrator | 2025-07-05 22:36:22.138371 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-07-05 22:36:22.138381 | orchestrator | Saturday 05 July 2025 22:36:01 +0000 (0:00:01.716) 0:00:41.347 ********* 2025-07-05 22:36:22.138392 | orchestrator | changed: [testbed-manager] 2025-07-05 22:36:22.138403 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:36:22.138414 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:36:22.138424 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:36:22.138435 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:36:22.138446 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:36:22.138456 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:36:22.138476 | orchestrator | 2025-07-05 22:36:22.138487 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-07-05 
22:36:22.138498 | orchestrator | Saturday 05 July 2025 22:36:02 +0000 (0:00:01.081) 0:00:42.428 ********* 2025-07-05 22:36:22.138508 | orchestrator | ok: [testbed-manager] 2025-07-05 22:36:22.138519 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:36:22.138530 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:36:22.138540 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:36:22.138551 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:36:22.138562 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:36:22.138572 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:36:22.138584 | orchestrator | 2025-07-05 22:36:22.138595 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-07-05 22:36:22.138605 | orchestrator | Saturday 05 July 2025 22:36:03 +0000 (0:00:00.805) 0:00:43.233 ********* 2025-07-05 22:36:22.138617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:36:22.138629 | orchestrator | 2025-07-05 22:36:22.138640 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-07-05 22:36:22.138652 | orchestrator | Saturday 05 July 2025 22:36:03 +0000 (0:00:00.282) 0:00:43.516 ********* 2025-07-05 22:36:22.138662 | orchestrator | changed: [testbed-manager] 2025-07-05 22:36:22.138673 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:36:22.138684 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:36:22.138694 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:36:22.138705 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:36:22.138716 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:36:22.138726 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:36:22.138737 | orchestrator | 2025-07-05 22:36:22.138764 | orchestrator | TASK 
[osism.services.rsyslog : Include additional log server tasks] ************ 2025-07-05 22:36:22.138775 | orchestrator | Saturday 05 July 2025 22:36:04 +0000 (0:00:01.035) 0:00:44.552 ********* 2025-07-05 22:36:22.138786 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:36:22.138797 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:36:22.138807 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:36:22.138818 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:36:22.138829 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:36:22.138839 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:36:22.138849 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:36:22.138860 | orchestrator | 2025-07-05 22:36:22.138871 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-07-05 22:36:22.138882 | orchestrator | Saturday 05 July 2025 22:36:04 +0000 (0:00:00.327) 0:00:44.879 ********* 2025-07-05 22:36:22.138892 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:36:22.138903 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:36:22.138913 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:36:22.138924 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:36:22.138934 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:36:22.138945 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:36:22.138956 | orchestrator | changed: [testbed-manager] 2025-07-05 22:36:22.138966 | orchestrator | 2025-07-05 22:36:22.138977 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-07-05 22:36:22.138988 | orchestrator | Saturday 05 July 2025 22:36:16 +0000 (0:00:11.606) 0:00:56.485 ********* 2025-07-05 22:36:22.139018 | orchestrator | ok: [testbed-manager] 2025-07-05 22:36:22.139029 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:36:22.139039 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:36:22.139050 | orchestrator | ok: 
[testbed-node-1] 2025-07-05 22:36:22.139061 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:36:22.139071 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:36:22.139082 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:36:22.139092 | orchestrator | 2025-07-05 22:36:22.139103 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-07-05 22:36:22.139121 | orchestrator | Saturday 05 July 2025 22:36:17 +0000 (0:00:01.111) 0:00:57.596 ********* 2025-07-05 22:36:22.139132 | orchestrator | ok: [testbed-manager] 2025-07-05 22:36:22.139142 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:36:22.139153 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:36:22.139164 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:36:22.139174 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:36:22.139185 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:36:22.139195 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:36:22.139206 | orchestrator | 2025-07-05 22:36:22.139216 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-07-05 22:36:22.139227 | orchestrator | Saturday 05 July 2025 22:36:18 +0000 (0:00:01.062) 0:00:58.658 ********* 2025-07-05 22:36:22.139238 | orchestrator | ok: [testbed-manager] 2025-07-05 22:36:22.139249 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:36:22.139259 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:36:22.139270 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:36:22.139280 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:36:22.139291 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:36:22.139301 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:36:22.139312 | orchestrator | 2025-07-05 22:36:22.139323 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-07-05 22:36:22.139334 | orchestrator | Saturday 05 July 2025 22:36:18 +0000 (0:00:00.248) 0:00:58.907 ********* 
2025-07-05 22:36:22.139344 | orchestrator | ok: [testbed-manager]
2025-07-05 22:36:22.139355 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:36:22.139365 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:36:22.139376 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:36:22.139386 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:36:22.139397 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:36:22.139407 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:36:22.139418 | orchestrator |
2025-07-05 22:36:22.139429 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-07-05 22:36:22.139439 | orchestrator | Saturday 05 July 2025 22:36:19 +0000 (0:00:00.249) 0:00:59.156 *********
2025-07-05 22:36:22.139466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:36:22.139478 | orchestrator |
2025-07-05 22:36:22.139489 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-07-05 22:36:22.139499 | orchestrator | Saturday 05 July 2025 22:36:19 +0000 (0:00:00.335) 0:00:59.492 *********
2025-07-05 22:36:22.139510 | orchestrator | ok: [testbed-manager]
2025-07-05 22:36:22.139520 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:36:22.139531 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:36:22.139542 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:36:22.139557 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:36:22.139567 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:36:22.139578 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:36:22.139589 | orchestrator |
2025-07-05 22:36:22.139600 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-07-05 22:36:22.139610 | orchestrator | Saturday 05 July 2025 22:36:21 +0000 (0:00:01.631) 0:01:01.124 *********
2025-07-05 22:36:22.139621 | orchestrator | changed: [testbed-manager]
2025-07-05 22:36:22.139632 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:36:22.139642 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:36:22.139652 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:36:22.139663 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:36:22.139673 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:36:22.139684 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:36:22.139694 | orchestrator |
2025-07-05 22:36:22.139705 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-07-05 22:36:22.139716 | orchestrator | Saturday 05 July 2025 22:36:21 +0000 (0:00:00.694) 0:01:01.818 *********
2025-07-05 22:36:22.139732 | orchestrator | ok: [testbed-manager]
2025-07-05 22:36:22.139743 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:36:22.139754 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:36:22.139764 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:36:22.139775 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:36:22.139785 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:36:22.139796 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:36:22.139806 | orchestrator |
2025-07-05 22:36:22.139823 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-07-05 22:38:40.793523 | orchestrator | Saturday 05 July 2025 22:36:22 +0000 (0:00:00.243) 0:01:02.062 *********
2025-07-05 22:38:40.793637 | orchestrator | ok: [testbed-manager]
2025-07-05 22:38:40.793654 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:38:40.793662 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:38:40.793669 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:38:40.793677 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:38:40.793684 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:38:40.793691 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:38:40.793699 | orchestrator |
2025-07-05 22:38:40.793707 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-07-05 22:38:40.793715 | orchestrator | Saturday 05 July 2025 22:36:23 +0000 (0:00:01.133) 0:01:03.196 *********
2025-07-05 22:38:40.793722 | orchestrator | changed: [testbed-manager]
2025-07-05 22:38:40.793730 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:38:40.793738 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:38:40.793745 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:38:40.793752 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:38:40.793759 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:38:40.793766 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:38:40.793774 | orchestrator |
2025-07-05 22:38:40.793781 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-07-05 22:38:40.793789 | orchestrator | Saturday 05 July 2025 22:36:24 +0000 (0:00:01.571) 0:01:04.768 *********
2025-07-05 22:38:40.793796 | orchestrator | ok: [testbed-manager]
2025-07-05 22:38:40.793803 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:38:40.793810 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:38:40.793817 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:38:40.793824 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:38:40.793832 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:38:40.793904 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:38:40.793912 | orchestrator |
2025-07-05 22:38:40.793919 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-07-05 22:38:40.793927 | orchestrator | Saturday 05 July 2025 22:36:26 +0000 (0:00:02.160) 0:01:06.929 *********
2025-07-05 22:38:40.793934 | orchestrator | ok: [testbed-manager]
2025-07-05 22:38:40.793942 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:38:40.793950 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:38:40.793957 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:38:40.793964 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:38:40.793972 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:38:40.793980 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:38:40.793987 | orchestrator |
2025-07-05 22:38:40.793995 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-07-05 22:38:40.794003 | orchestrator | Saturday 05 July 2025 22:37:03 +0000 (0:00:36.873) 0:01:43.802 *********
2025-07-05 22:38:40.794051 | orchestrator | changed: [testbed-manager]
2025-07-05 22:38:40.794061 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:38:40.794069 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:38:40.794076 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:38:40.794083 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:38:40.794090 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:38:40.794099 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:38:40.794107 | orchestrator |
2025-07-05 22:38:40.794115 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-07-05 22:38:40.794147 | orchestrator | Saturday 05 July 2025 22:38:21 +0000 (0:01:17.644) 0:03:01.447 *********
2025-07-05 22:38:40.794154 | orchestrator | ok: [testbed-manager]
2025-07-05 22:38:40.794163 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:38:40.794172 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:38:40.794179 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:38:40.794186 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:38:40.794192 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:38:40.794198 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:38:40.794205 | orchestrator |
2025-07-05 22:38:40.794212 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-07-05 22:38:40.794220 | orchestrator | Saturday 05 July 2025 22:38:23 +0000 (0:00:01.748) 0:03:03.195 *********
2025-07-05 22:38:40.794227 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:38:40.794233 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:38:40.794240 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:38:40.794246 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:38:40.794252 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:38:40.794258 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:38:40.794264 | orchestrator | changed: [testbed-manager]
2025-07-05 22:38:40.794270 | orchestrator |
2025-07-05 22:38:40.794277 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-07-05 22:38:40.794284 | orchestrator | Saturday 05 July 2025 22:38:35 +0000 (0:00:11.907) 0:03:15.102 *********
2025-07-05 22:38:40.794312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-07-05 22:38:40.794325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-07-05 22:38:40.794356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-07-05 22:38:40.794368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-07-05 22:38:40.794375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-07-05 22:38:40.794381 | orchestrator |
2025-07-05 22:38:40.794387 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-07-05 22:38:40.794393 | orchestrator | Saturday 05 July 2025 22:38:35 +0000 (0:00:00.382) 0:03:15.485 *********
2025-07-05 22:38:40.794401 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-05 22:38:40.794409 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:38:40.794425 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-05 22:38:40.794431 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-05 22:38:40.794437 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:38:40.794444 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:38:40.794450 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-05 22:38:40.794456 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:38:40.794463 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-05 22:38:40.794469 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-05 22:38:40.794476 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-05 22:38:40.794483 | orchestrator |
2025-07-05 22:38:40.794489 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-07-05 22:38:40.794496 | orchestrator | Saturday 05 July 2025 22:38:36 +0000 (0:00:00.630) 0:03:16.116 *********
2025-07-05 22:38:40.794503 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-05 22:38:40.794511 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-05 22:38:40.794517 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-05 22:38:40.794524 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-05 22:38:40.794531 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-05 22:38:40.794537 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-05 22:38:40.794544 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-05 22:38:40.794550 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-05 22:38:40.794557 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-05 22:38:40.794564 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-05 22:38:40.794571 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:38:40.794582 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-05 22:38:40.794590 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-05 22:38:40.794597 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-05 22:38:40.794603 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-05 22:38:40.794610 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-05 22:38:40.794617 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-05 22:38:40.794624 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-05 22:38:40.794631 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-05 22:38:40.794637 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-05 22:38:40.794645 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-05 22:38:40.794659 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-05 22:38:43.789659 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-05 22:38:43.789780 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-05 22:38:43.789793 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:38:43.789805 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-05 22:38:43.789814 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-05 22:38:43.789823 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-05 22:38:43.789890 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-05 22:38:43.789900 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-05 22:38:43.789909 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-05 22:38:43.789918 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-05 22:38:43.789926 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-05 22:38:43.789935 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-05 22:38:43.789944 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:38:43.789952 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-05 22:38:43.789961 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-05 22:38:43.789970 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-05 22:38:43.789978 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-05 22:38:43.789987 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-05 22:38:43.789995 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-05 22:38:43.790003 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-05 22:38:43.790012 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-05 22:38:43.790064 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:38:43.790073 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-05 22:38:43.790082 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-05 22:38:43.790090 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-05 22:38:43.790099 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-05 22:38:43.790107 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-05 22:38:43.790117 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-05 22:38:43.790155 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-05 22:38:43.790166 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-05 22:38:43.790174 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-05 22:38:43.790183 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-05 22:38:43.790192 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-05 22:38:43.790202 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-05 22:38:43.790220 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-05 22:38:43.790245 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-05 22:38:43.790256 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-05 22:38:43.790266 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-05 22:38:43.790276 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-05 22:38:43.790286 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-05 22:38:43.790296 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-05 22:38:43.790306 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-05 22:38:43.790316 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-05 22:38:43.790343 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-05 22:38:43.790354 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-05 22:38:43.790364 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-05 22:38:43.790374 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-05 22:38:43.790384 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-05 22:38:43.790394 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-05 22:38:43.790404 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-05 22:38:43.790415 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-05 22:38:43.790425 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-05 22:38:43.790434 | orchestrator |
2025-07-05 22:38:43.790445 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-07-05 22:38:43.790454 | orchestrator | Saturday 05 July 2025 22:38:40 +0000 (0:00:04.596) 0:03:20.712 *********
2025-07-05 22:38:43.790464 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-05 22:38:43.790474 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-05 22:38:43.790484 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-05 22:38:43.790494 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-05 22:38:43.790503 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-05 22:38:43.790513 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-05 22:38:43.790523 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-05 22:38:43.790533 | orchestrator |
2025-07-05 22:38:43.790543 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-07-05 22:38:43.790553 | orchestrator | Saturday 05 July 2025 22:38:42 +0000 (0:00:01.458) 0:03:22.171 *********
2025-07-05 22:38:43.790562 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-05 22:38:43.790570 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:38:43.790579 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-05 22:38:43.790588 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-05 22:38:43.790596 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:38:43.790611 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:38:43.790620 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-05 22:38:43.790628 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:38:43.790637 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-05 22:38:43.790646 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-05 22:38:43.790654 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-05 22:38:43.790663 | orchestrator |
2025-07-05 22:38:43.790671 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-07-05 22:38:43.790680 | orchestrator | Saturday 05 July 2025 22:38:42 +0000 (0:00:00.599) 0:03:22.770 *********
2025-07-05 22:38:43.790689 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-05 22:38:43.790697 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:38:43.790711 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-05 22:38:43.790719 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-05 22:38:43.790728 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:38:43.790737 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:38:43.790745 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-05 22:38:43.790754 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:38:43.790762 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-05 22:38:43.790771 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-05 22:38:43.790779 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-05 22:38:43.790788 | orchestrator |
2025-07-05 22:38:43.790796 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-07-05 22:38:43.790805 | orchestrator | Saturday 05 July 2025 22:38:43 +0000 (0:00:00.660) 0:03:23.430 *********
2025-07-05 22:38:43.790813 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:38:43.790822 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:38:43.790848 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:38:43.790857 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:38:43.790866 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:38:43.790880 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:38:56.913570 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:38:56.913694 | orchestrator |
2025-07-05 22:38:56.913712 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-07-05 22:38:56.913725 | orchestrator | Saturday 05 July 2025 22:38:43 +0000 (0:00:00.288) 0:03:23.719 *********
2025-07-05 22:38:56.913736 | orchestrator | ok: [testbed-manager]
2025-07-05 22:38:56.913747 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:38:56.913758 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:38:56.913769 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:38:56.913779 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:38:56.913790 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:38:56.913801 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:38:56.913812 | orchestrator |
2025-07-05 22:38:56.913892 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-07-05 22:38:56.913904 | orchestrator | Saturday 05 July 2025 22:38:50 +0000 (0:00:06.520) 0:03:30.239 *********
2025-07-05 22:38:56.913915 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-07-05 22:38:56.913927 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-07-05 22:38:56.913938 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:38:56.913949 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-07-05 22:38:56.913984 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:38:56.913995 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-07-05 22:38:56.914006 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:38:56.914075 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-07-05 22:38:56.914088 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:38:56.914099 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-07-05 22:38:56.914110 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:38:56.914121 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:38:56.914131 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-07-05 22:38:56.914142 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:38:56.914153 | orchestrator |
2025-07-05 22:38:56.914163 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-07-05 22:38:56.914174 | orchestrator | Saturday 05 July 2025 22:38:50 +0000 (0:00:00.289) 0:03:30.528 *********
2025-07-05 22:38:56.914185 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-07-05 22:38:56.914201 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-07-05 22:38:56.914212 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-07-05 22:38:56.914228 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-07-05 22:38:56.914247 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-07-05 22:38:56.914266 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-07-05 22:38:56.914283 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-07-05 22:38:56.914302 | orchestrator |
2025-07-05 22:38:56.914323 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-07-05 22:38:56.914343 | orchestrator | Saturday 05 July 2025 22:38:52 +0000 (0:00:01.767) 0:03:32.295 *********
2025-07-05 22:38:56.914364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:38:56.914379 | orchestrator |
2025-07-05 22:38:56.914390 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-07-05 22:38:56.914401 | orchestrator | Saturday 05 July 2025 22:38:52 +0000 (0:00:00.492) 0:03:32.788 *********
2025-07-05 22:38:56.914412 | orchestrator | ok: [testbed-manager]
2025-07-05 22:38:56.914423 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:38:56.914433 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:38:56.914444 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:38:56.914454 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:38:56.914465 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:38:56.914475 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:38:56.914486 | orchestrator |
2025-07-05 22:38:56.914497 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-07-05 22:38:56.914508 | orchestrator | Saturday 05 July 2025 22:38:54 +0000 (0:00:01.239) 0:03:34.027 *********
2025-07-05 22:38:56.914518 | orchestrator | ok: [testbed-manager]
2025-07-05 22:38:56.914529 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:38:56.914540 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:38:56.914550 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:38:56.914561 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:38:56.914571 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:38:56.914582 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:38:56.914592 | orchestrator |
2025-07-05 22:38:56.914603 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-07-05 22:38:56.914630 | orchestrator | Saturday 05 July 2025 22:38:54 +0000 (0:00:00.653) 0:03:34.680 *********
2025-07-05 22:38:56.914641 | orchestrator | changed: [testbed-manager]
2025-07-05 22:38:56.914652 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:38:56.914663 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:38:56.914673 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:38:56.914684 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:38:56.914695 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:38:56.914705 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:38:56.914726 | orchestrator |
2025-07-05 22:38:56.914736 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-07-05 22:38:56.914747 | orchestrator | Saturday 05 July 2025 22:38:55 +0000 (0:00:00.594) 0:03:35.275 *********
2025-07-05 22:38:56.914758 | orchestrator | ok: [testbed-manager]
2025-07-05 22:38:56.914768 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:38:56.914779 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:38:56.914790 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:38:56.914800 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:38:56.914811 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:38:56.914848 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:38:56.914859 | orchestrator |
2025-07-05 22:38:56.914870 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-07-05 22:38:56.914881 | orchestrator | Saturday 05 July 2025 22:38:55 +0000 (0:00:00.556) 0:03:35.831 *********
2025-07-05 22:38:56.914919 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751753712.5457118, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 22:38:56.914936 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751753781.503596, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 22:38:56.914948 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751753781.2272925, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 22:38:56.914960 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751753773.9525728, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 22:38:56.914971 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751753786.7183192, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 22:38:56.914983 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751753776.04953, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 22:38:56.915002 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False,
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751753793.5633903, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 22:38:56.915032 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751753748.5634346, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 22:39:21.404717 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751753673.7808754, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 22:39:21.404902 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751753682.638085, 'mtime': 
1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 22:39:21.404938 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751753670.3379507, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 22:39:21.404950 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751753679.36774, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 22:39:21.404961 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751753670.855754, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2025-07-05 22:39:21.404994 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751753683.937441, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 22:39:21.405006 | orchestrator | 2025-07-05 22:39:21.405018 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-07-05 22:39:21.405030 | orchestrator | Saturday 05 July 2025 22:38:56 +0000 (0:00:01.000) 0:03:36.831 ********* 2025-07-05 22:39:21.405040 | orchestrator | changed: [testbed-manager] 2025-07-05 22:39:21.405050 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:39:21.405060 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:39:21.405069 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:39:21.405078 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:39:21.405087 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:39:21.405096 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:39:21.405106 | orchestrator | 2025-07-05 22:39:21.405116 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-07-05 22:39:21.405125 | orchestrator | Saturday 05 July 2025 22:38:58 +0000 (0:00:01.140) 0:03:37.972 ********* 2025-07-05 22:39:21.405135 | orchestrator | changed: [testbed-manager] 2025-07-05 22:39:21.405144 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:39:21.405153 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:39:21.405162 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:39:21.405188 | 
orchestrator | changed: [testbed-node-4] 2025-07-05 22:39:21.405198 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:39:21.405207 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:39:21.405217 | orchestrator | 2025-07-05 22:39:21.405226 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-07-05 22:39:21.405236 | orchestrator | Saturday 05 July 2025 22:38:59 +0000 (0:00:01.194) 0:03:39.167 ********* 2025-07-05 22:39:21.405246 | orchestrator | changed: [testbed-manager] 2025-07-05 22:39:21.405256 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:39:21.405267 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:39:21.405278 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:39:21.405289 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:39:21.405300 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:39:21.405311 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:39:21.405323 | orchestrator | 2025-07-05 22:39:21.405334 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-07-05 22:39:21.405345 | orchestrator | Saturday 05 July 2025 22:39:00 +0000 (0:00:01.180) 0:03:40.347 ********* 2025-07-05 22:39:21.405356 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:39:21.405367 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:39:21.405377 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:39:21.405388 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:39:21.405399 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:39:21.405410 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:39:21.405420 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:39:21.405431 | orchestrator | 2025-07-05 22:39:21.405442 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-07-05 22:39:21.405454 | orchestrator | Saturday 05 July 2025 22:39:00 +0000 
(0:00:00.282) 0:03:40.630 ********* 2025-07-05 22:39:21.405472 | orchestrator | ok: [testbed-manager] 2025-07-05 22:39:21.405484 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:39:21.405495 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:39:21.405506 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:39:21.405517 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:39:21.405528 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:39:21.405539 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:39:21.405550 | orchestrator | 2025-07-05 22:39:21.405561 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-07-05 22:39:21.405572 | orchestrator | Saturday 05 July 2025 22:39:01 +0000 (0:00:00.727) 0:03:41.358 ********* 2025-07-05 22:39:21.405586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:39:21.405599 | orchestrator | 2025-07-05 22:39:21.405610 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-07-05 22:39:21.405621 | orchestrator | Saturday 05 July 2025 22:39:01 +0000 (0:00:00.387) 0:03:41.745 ********* 2025-07-05 22:39:21.405632 | orchestrator | ok: [testbed-manager] 2025-07-05 22:39:21.405641 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:39:21.405651 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:39:21.405660 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:39:21.405670 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:39:21.405679 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:39:21.405688 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:39:21.405698 | orchestrator | 2025-07-05 22:39:21.405707 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 
2025-07-05 22:39:21.405717 | orchestrator | Saturday 05 July 2025 22:39:09 +0000 (0:00:07.965) 0:03:49.710 ********* 2025-07-05 22:39:21.405726 | orchestrator | ok: [testbed-manager] 2025-07-05 22:39:21.405735 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:39:21.405745 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:39:21.405754 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:39:21.405763 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:39:21.405773 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:39:21.405782 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:39:21.405820 | orchestrator | 2025-07-05 22:39:21.405830 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-07-05 22:39:21.405840 | orchestrator | Saturday 05 July 2025 22:39:11 +0000 (0:00:01.266) 0:03:50.977 ********* 2025-07-05 22:39:21.405849 | orchestrator | ok: [testbed-manager] 2025-07-05 22:39:21.405859 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:39:21.405868 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:39:21.405877 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:39:21.405887 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:39:21.405902 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:39:21.405911 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:39:21.405921 | orchestrator | 2025-07-05 22:39:21.405930 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-07-05 22:39:21.405940 | orchestrator | Saturday 05 July 2025 22:39:12 +0000 (0:00:01.067) 0:03:52.045 ********* 2025-07-05 22:39:21.405950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:39:21.405960 | orchestrator | 2025-07-05 22:39:21.405969 | orchestrator | TASK [osism.services.smartd : 
Install smartmontools package] ******************* 2025-07-05 22:39:21.405979 | orchestrator | Saturday 05 July 2025 22:39:12 +0000 (0:00:00.481) 0:03:52.526 ********* 2025-07-05 22:39:21.405988 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:39:21.405998 | orchestrator | changed: [testbed-manager] 2025-07-05 22:39:21.406007 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:39:21.406075 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:39:21.406089 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:39:21.406105 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:39:21.406115 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:39:21.406125 | orchestrator | 2025-07-05 22:39:21.406134 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-07-05 22:39:21.406144 | orchestrator | Saturday 05 July 2025 22:39:20 +0000 (0:00:08.170) 0:04:00.697 ********* 2025-07-05 22:39:21.406153 | orchestrator | changed: [testbed-manager] 2025-07-05 22:39:21.406163 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:39:21.406172 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:39:21.406181 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:39:21.406199 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:40:27.673104 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:40:27.673224 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:40:27.673242 | orchestrator | 2025-07-05 22:40:27.673258 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-07-05 22:40:27.673271 | orchestrator | Saturday 05 July 2025 22:39:21 +0000 (0:00:00.632) 0:04:01.329 ********* 2025-07-05 22:40:27.673283 | orchestrator | changed: [testbed-manager] 2025-07-05 22:40:27.673293 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:40:27.673304 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:40:27.673315 | orchestrator | changed: 
[testbed-node-2] 2025-07-05 22:40:27.673331 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:40:27.673350 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:40:27.673361 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:40:27.673372 | orchestrator | 2025-07-05 22:40:27.673383 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-07-05 22:40:27.673395 | orchestrator | Saturday 05 July 2025 22:39:22 +0000 (0:00:01.200) 0:04:02.529 ********* 2025-07-05 22:40:27.673405 | orchestrator | changed: [testbed-manager] 2025-07-05 22:40:27.673416 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:40:27.673426 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:40:27.673455 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:40:27.673466 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:40:27.673487 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:40:27.673498 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:40:27.673509 | orchestrator | 2025-07-05 22:40:27.673520 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-07-05 22:40:27.673531 | orchestrator | Saturday 05 July 2025 22:39:23 +0000 (0:00:01.047) 0:04:03.576 ********* 2025-07-05 22:40:27.673541 | orchestrator | ok: [testbed-manager] 2025-07-05 22:40:27.673553 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:40:27.673564 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:40:27.673575 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:40:27.673585 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:40:27.673596 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:40:27.673607 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:40:27.673617 | orchestrator | 2025-07-05 22:40:27.673628 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-07-05 22:40:27.673640 | orchestrator | Saturday 05 July 2025 
22:39:23 +0000 (0:00:00.300) 0:04:03.877 ********* 2025-07-05 22:40:27.673651 | orchestrator | ok: [testbed-manager] 2025-07-05 22:40:27.673661 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:40:27.673672 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:40:27.673682 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:40:27.673693 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:40:27.673703 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:40:27.673758 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:40:27.673770 | orchestrator | 2025-07-05 22:40:27.673780 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-07-05 22:40:27.673791 | orchestrator | Saturday 05 July 2025 22:39:24 +0000 (0:00:00.286) 0:04:04.164 ********* 2025-07-05 22:40:27.673801 | orchestrator | ok: [testbed-manager] 2025-07-05 22:40:27.673812 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:40:27.673822 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:40:27.673853 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:40:27.673864 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:40:27.673874 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:40:27.673885 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:40:27.673895 | orchestrator | 2025-07-05 22:40:27.673906 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-07-05 22:40:27.673917 | orchestrator | Saturday 05 July 2025 22:39:24 +0000 (0:00:00.315) 0:04:04.479 ********* 2025-07-05 22:40:27.673927 | orchestrator | ok: [testbed-manager] 2025-07-05 22:40:27.673938 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:40:27.673948 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:40:27.673958 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:40:27.673970 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:40:27.673980 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:40:27.673991 | orchestrator | ok: 
[testbed-node-0] 2025-07-05 22:40:27.674001 | orchestrator | 2025-07-05 22:40:27.674012 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-07-05 22:40:27.674111 | orchestrator | Saturday 05 July 2025 22:39:30 +0000 (0:00:05.619) 0:04:10.099 ********* 2025-07-05 22:40:27.674134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:40:27.674148 | orchestrator | 2025-07-05 22:40:27.674159 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-07-05 22:40:27.674170 | orchestrator | Saturday 05 July 2025 22:39:30 +0000 (0:00:00.388) 0:04:10.487 ********* 2025-07-05 22:40:27.674180 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-07-05 22:40:27.674191 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-07-05 22:40:27.674202 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:40:27.674213 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-07-05 22:40:27.674223 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-07-05 22:40:27.674234 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-07-05 22:40:27.674244 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-07-05 22:40:27.674255 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:40:27.674271 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-07-05 22:40:27.674288 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-07-05 22:40:27.674299 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:40:27.674310 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-07-05 22:40:27.674321 | orchestrator | 
skipping: [testbed-node-3] => (item=apt-daily)  2025-07-05 22:40:27.674331 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:40:27.674342 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-07-05 22:40:27.674352 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-07-05 22:40:27.674363 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:40:27.674373 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:40:27.674403 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-07-05 22:40:27.674415 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-07-05 22:40:27.674425 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:40:27.674436 | orchestrator | 2025-07-05 22:40:27.674447 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-07-05 22:40:27.674457 | orchestrator | Saturday 05 July 2025 22:39:30 +0000 (0:00:00.353) 0:04:10.840 ********* 2025-07-05 22:40:27.674469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:40:27.674479 | orchestrator | 2025-07-05 22:40:27.674490 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-07-05 22:40:27.674510 | orchestrator | Saturday 05 July 2025 22:39:31 +0000 (0:00:00.389) 0:04:11.229 ********* 2025-07-05 22:40:27.674521 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-07-05 22:40:27.674531 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:40:27.674542 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-07-05 22:40:27.674552 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-07-05 22:40:27.674563 | orchestrator | 
skipping: [testbed-node-0] 2025-07-05 22:40:27.674574 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-07-05 22:40:27.674584 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:40:27.674595 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:40:27.674605 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-07-05 22:40:27.674616 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-07-05 22:40:27.674627 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:40:27.674637 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:40:27.674648 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-07-05 22:40:27.674658 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:40:27.674669 | orchestrator | 2025-07-05 22:40:27.674679 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-07-05 22:40:27.674690 | orchestrator | Saturday 05 July 2025 22:39:31 +0000 (0:00:00.318) 0:04:11.548 ********* 2025-07-05 22:40:27.674701 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:40:27.674743 | orchestrator | 2025-07-05 22:40:27.674755 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-07-05 22:40:27.674766 | orchestrator | Saturday 05 July 2025 22:39:32 +0000 (0:00:00.528) 0:04:12.077 ********* 2025-07-05 22:40:27.674777 | orchestrator | changed: [testbed-manager] 2025-07-05 22:40:27.674788 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:40:27.674798 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:40:27.674809 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:40:27.674819 | orchestrator | changed: [testbed-node-0] 
2025-07-05 22:40:27.674829 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:40:27.674840 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:40:27.674850 | orchestrator | 2025-07-05 22:40:27.674861 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-07-05 22:40:27.674872 | orchestrator | Saturday 05 July 2025 22:40:05 +0000 (0:00:33.559) 0:04:45.637 ********* 2025-07-05 22:40:27.674882 | orchestrator | changed: [testbed-manager] 2025-07-05 22:40:27.674893 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:40:27.674904 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:40:27.674914 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:40:27.674925 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:40:27.674935 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:40:27.674946 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:40:27.674956 | orchestrator | 2025-07-05 22:40:27.674966 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-07-05 22:40:27.674977 | orchestrator | Saturday 05 July 2025 22:40:13 +0000 (0:00:07.655) 0:04:53.293 ********* 2025-07-05 22:40:27.674988 | orchestrator | changed: [testbed-manager] 2025-07-05 22:40:27.674999 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:40:27.675009 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:40:27.675020 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:40:27.675030 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:40:27.675041 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:40:27.675051 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:40:27.675062 | orchestrator | 2025-07-05 22:40:27.675073 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-07-05 22:40:27.675090 | orchestrator | Saturday 05 July 2025 22:40:20 +0000 (0:00:07.313) 0:05:00.607 ********* 2025-07-05 
22:40:27.675101 | orchestrator | ok: [testbed-manager]
2025-07-05 22:40:27.675111 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:40:27.675122 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:40:27.675133 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:40:27.675143 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:40:27.675154 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:40:27.675164 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:40:27.675175 | orchestrator |
2025-07-05 22:40:27.675185 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-07-05 22:40:27.675196 | orchestrator | Saturday 05 July 2025 22:40:22 +0000 (0:00:01.592) 0:05:02.199 *********
2025-07-05 22:40:27.675207 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:40:27.675217 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:40:27.675228 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:40:27.675238 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:40:27.675249 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:40:27.675260 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:40:27.675279 | orchestrator | changed: [testbed-manager]
2025-07-05 22:40:27.675298 | orchestrator |
2025-07-05 22:40:27.675317 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-07-05 22:40:27.675344 | orchestrator | Saturday 05 July 2025 22:40:27 +0000 (0:00:05.391) 0:05:07.591 *********
2025-07-05 22:40:38.560145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:40:38.560265 | orchestrator |
2025-07-05 22:40:38.560283 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-07-05 22:40:38.560296 | orchestrator | Saturday 05 July 2025 22:40:28 +0000 (0:00:00.412) 0:05:08.003 *********
2025-07-05 22:40:38.560308 | orchestrator | changed: [testbed-manager]
2025-07-05 22:40:38.560319 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:40:38.560330 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:40:38.560341 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:40:38.560352 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:40:38.560363 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:40:38.560373 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:40:38.560384 | orchestrator |
2025-07-05 22:40:38.560395 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-07-05 22:40:38.560406 | orchestrator | Saturday 05 July 2025 22:40:28 +0000 (0:00:00.736) 0:05:08.740 *********
2025-07-05 22:40:38.560417 | orchestrator | ok: [testbed-manager]
2025-07-05 22:40:38.560429 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:40:38.560439 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:40:38.560450 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:40:38.560460 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:40:38.560471 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:40:38.560487 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:40:38.560506 | orchestrator |
2025-07-05 22:40:38.560526 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-07-05 22:40:38.560545 | orchestrator | Saturday 05 July 2025 22:40:30 +0000 (0:00:01.670) 0:05:10.411 *********
2025-07-05 22:40:38.560564 | orchestrator | changed: [testbed-manager]
2025-07-05 22:40:38.560583 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:40:38.560604 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:40:38.560628 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:40:38.560656 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:40:38.560674 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:40:38.560739 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:40:38.560762 | orchestrator |
2025-07-05 22:40:38.560781 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-07-05 22:40:38.560841 | orchestrator | Saturday 05 July 2025 22:40:31 +0000 (0:00:00.783) 0:05:11.194 *********
2025-07-05 22:40:38.560893 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:40:38.560912 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:40:38.560931 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:40:38.560950 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:40:38.560968 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:40:38.560986 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:40:38.561004 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:40:38.561022 | orchestrator |
2025-07-05 22:40:38.561041 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-07-05 22:40:38.561060 | orchestrator | Saturday 05 July 2025 22:40:31 +0000 (0:00:00.272) 0:05:11.467 *********
2025-07-05 22:40:38.561078 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:40:38.561096 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:40:38.561107 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:40:38.561118 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:40:38.561128 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:40:38.561138 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:40:38.561149 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:40:38.561159 | orchestrator |
2025-07-05 22:40:38.561170 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-07-05 22:40:38.561181 | orchestrator | Saturday 05 July 2025 22:40:31 +0000 (0:00:00.381) 0:05:11.848 *********
2025-07-05 22:40:38.561191 | orchestrator | ok: [testbed-manager]
2025-07-05 22:40:38.561202 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:40:38.561213 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:40:38.561223 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:40:38.561233 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:40:38.561245 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:40:38.561255 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:40:38.561265 | orchestrator |
2025-07-05 22:40:38.561276 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-07-05 22:40:38.561294 | orchestrator | Saturday 05 July 2025 22:40:32 +0000 (0:00:00.304) 0:05:12.153 *********
2025-07-05 22:40:38.561305 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:40:38.561315 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:40:38.561326 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:40:38.561336 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:40:38.561347 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:40:38.561357 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:40:38.561368 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:40:38.561378 | orchestrator |
2025-07-05 22:40:38.561388 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-07-05 22:40:38.561400 | orchestrator | Saturday 05 July 2025 22:40:32 +0000 (0:00:00.251) 0:05:12.404 *********
2025-07-05 22:40:38.561410 | orchestrator | ok: [testbed-manager]
2025-07-05 22:40:38.561421 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:40:38.561431 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:40:38.561442 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:40:38.561452 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:40:38.561463 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:40:38.561473 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:40:38.561483 | orchestrator |
2025-07-05 22:40:38.561494 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-07-05 22:40:38.561505 | orchestrator | Saturday 05 July 2025 22:40:32 +0000 (0:00:00.322) 0:05:12.726 *********
2025-07-05 22:40:38.561515 | orchestrator | ok: [testbed-manager] =>
2025-07-05 22:40:38.561526 | orchestrator |  docker_version: 5:27.5.1
2025-07-05 22:40:38.561536 | orchestrator | ok: [testbed-node-0] =>
2025-07-05 22:40:38.561546 | orchestrator |  docker_version: 5:27.5.1
2025-07-05 22:40:38.561557 | orchestrator | ok: [testbed-node-1] =>
2025-07-05 22:40:38.561567 | orchestrator |  docker_version: 5:27.5.1
2025-07-05 22:40:38.561577 | orchestrator | ok: [testbed-node-2] =>
2025-07-05 22:40:38.561588 | orchestrator |  docker_version: 5:27.5.1
2025-07-05 22:40:38.561610 | orchestrator | ok: [testbed-node-3] =>
2025-07-05 22:40:38.561621 | orchestrator |  docker_version: 5:27.5.1
2025-07-05 22:40:38.561654 | orchestrator | ok: [testbed-node-4] =>
2025-07-05 22:40:38.561666 | orchestrator |  docker_version: 5:27.5.1
2025-07-05 22:40:38.561676 | orchestrator | ok: [testbed-node-5] =>
2025-07-05 22:40:38.561687 | orchestrator |  docker_version: 5:27.5.1
2025-07-05 22:40:38.561725 | orchestrator |
2025-07-05 22:40:38.561738 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-07-05 22:40:38.561748 | orchestrator | Saturday 05 July 2025 22:40:33 +0000 (0:00:00.306) 0:05:13.033 *********
2025-07-05 22:40:38.561759 | orchestrator | ok: [testbed-manager] =>
2025-07-05 22:40:38.561769 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-05 22:40:38.561780 | orchestrator | ok: [testbed-node-0] =>
2025-07-05 22:40:38.561790 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-05 22:40:38.561801 | orchestrator | ok: [testbed-node-1] =>
2025-07-05 22:40:38.561811 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-05 22:40:38.561821 | orchestrator | ok: [testbed-node-2] =>
2025-07-05 22:40:38.561831 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-05 22:40:38.561842 | orchestrator | ok: [testbed-node-3] =>
2025-07-05 22:40:38.561852 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-05 22:40:38.561863 | orchestrator | ok: [testbed-node-4] =>
2025-07-05 22:40:38.561873 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-05 22:40:38.561883 | orchestrator | ok: [testbed-node-5] =>
2025-07-05 22:40:38.561894 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-05 22:40:38.561904 | orchestrator |
2025-07-05 22:40:38.561915 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-07-05 22:40:38.561925 | orchestrator | Saturday 05 July 2025 22:40:33 +0000 (0:00:00.386) 0:05:13.420 *********
2025-07-05 22:40:38.561936 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:40:38.561946 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:40:38.561956 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:40:38.561967 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:40:38.561977 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:40:38.561987 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:40:38.561998 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:40:38.562008 | orchestrator |
2025-07-05 22:40:38.562117 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-07-05 22:40:38.562131 | orchestrator | Saturday 05 July 2025 22:40:33 +0000 (0:00:00.277) 0:05:13.698 *********
2025-07-05 22:40:38.562142 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:40:38.562152 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:40:38.562163 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:40:38.562173 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:40:38.562183 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:40:38.562194 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:40:38.562205 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:40:38.562215 | orchestrator |
2025-07-05 22:40:38.562226 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-07-05 22:40:38.562236 | orchestrator | Saturday 05 July 2025 22:40:34 +0000 (0:00:00.239) 0:05:13.938 *********
2025-07-05 22:40:38.562250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:40:38.562263 | orchestrator |
2025-07-05 22:40:38.562274 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-07-05 22:40:38.562285 | orchestrator | Saturday 05 July 2025 22:40:34 +0000 (0:00:00.402) 0:05:14.341 *********
2025-07-05 22:40:38.562295 | orchestrator | ok: [testbed-manager]
2025-07-05 22:40:38.562306 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:40:38.562317 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:40:38.562392 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:40:38.562416 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:40:38.562427 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:40:38.562437 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:40:38.562448 | orchestrator |
2025-07-05 22:40:38.562459 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-07-05 22:40:38.562469 | orchestrator | Saturday 05 July 2025 22:40:35 +0000 (0:00:00.777) 0:05:15.119 *********
2025-07-05 22:40:38.562480 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:40:38.562491 | orchestrator | ok: [testbed-manager]
2025-07-05 22:40:38.562501 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:40:38.562512 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:40:38.562522 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:40:38.562540 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:40:38.562550 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:40:38.562561 | orchestrator |
2025-07-05 22:40:38.562571 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-07-05 22:40:38.562583 | orchestrator | Saturday 05 July 2025 22:40:37 +0000 (0:00:02.759) 0:05:17.878 *********
2025-07-05 22:40:38.562594 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-07-05 22:40:38.562605 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-07-05 22:40:38.562615 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-07-05 22:40:38.562626 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-07-05 22:40:38.562636 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-07-05 22:40:38.562647 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-07-05 22:40:38.562657 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:40:38.562668 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-07-05 22:40:38.562678 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-07-05 22:40:38.562689 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-07-05 22:40:38.562725 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:40:38.562746 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-07-05 22:40:38.562766 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-07-05 22:40:38.562785 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:40:38.562799 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-07-05 22:40:38.562809 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-07-05 22:40:38.562820 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-07-05 22:40:38.562831 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:40:38.562853 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-07-05 22:41:35.416728 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-07-05 22:41:35.416865 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-07-05 22:41:35.416884 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-07-05 22:41:35.416895 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:41:35.416907 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:41:35.416917 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-07-05 22:41:35.416928 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-07-05 22:41:35.416939 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-07-05 22:41:35.416949 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:41:35.416960 | orchestrator |
2025-07-05 22:41:35.416972 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-07-05 22:41:35.416985 | orchestrator | Saturday 05 July 2025 22:40:38 +0000 (0:00:00.753) 0:05:18.631 *********
2025-07-05 22:41:35.416995 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:35.417006 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:35.417017 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:35.417028 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:35.417038 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:35.417049 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:35.417086 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:35.417097 | orchestrator |
2025-07-05 22:41:35.417108 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-07-05 22:41:35.417118 | orchestrator | Saturday 05 July 2025 22:40:44 +0000 (0:00:05.956) 0:05:24.588 *********
2025-07-05 22:41:35.417129 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:35.417139 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:35.417149 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:35.417159 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:35.417170 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:35.417180 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:35.417190 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:35.417201 | orchestrator |
2025-07-05 22:41:35.417211 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-07-05 22:41:35.417222 | orchestrator | Saturday 05 July 2025 22:40:45 +0000 (0:00:01.078) 0:05:25.666 *********
2025-07-05 22:41:35.417232 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:35.417242 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:35.417252 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:35.417263 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:35.417273 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:35.417283 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:35.417293 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:35.417304 | orchestrator |
2025-07-05 22:41:35.417314 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-07-05 22:41:35.417325 | orchestrator | Saturday 05 July 2025 22:40:53 +0000 (0:00:07.434) 0:05:33.101 *********
2025-07-05 22:41:35.417335 | orchestrator | changed: [testbed-manager]
2025-07-05 22:41:35.417345 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:35.417355 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:35.417366 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:35.417377 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:35.417387 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:35.417398 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:35.417408 | orchestrator |
2025-07-05 22:41:35.417419 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-07-05 22:41:35.417429 | orchestrator | Saturday 05 July 2025 22:40:56 +0000 (0:00:03.103) 0:05:36.204 *********
2025-07-05 22:41:35.417440 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:35.417450 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:35.417461 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:35.417471 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:35.417481 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:35.417491 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:35.417502 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:35.417512 | orchestrator |
2025-07-05 22:41:35.417522 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-07-05 22:41:35.417533 | orchestrator | Saturday 05 July 2025 22:40:57 +0000 (0:00:01.533) 0:05:37.738 *********
2025-07-05 22:41:35.417543 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:35.417568 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:35.417578 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:35.417589 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:35.417599 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:35.417609 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:35.417620 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:35.417630 | orchestrator |
2025-07-05 22:41:35.417662 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-07-05 22:41:35.417673 | orchestrator | Saturday 05 July 2025 22:40:59 +0000 (0:00:01.346) 0:05:39.085 *********
2025-07-05 22:41:35.417683 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:41:35.417693 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:41:35.417704 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:41:35.417724 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:41:35.417734 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:41:35.417744 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:41:35.417754 | orchestrator | changed: [testbed-manager]
2025-07-05 22:41:35.417765 | orchestrator |
2025-07-05 22:41:35.417775 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-07-05 22:41:35.417786 | orchestrator | Saturday 05 July 2025 22:40:59 +0000 (0:00:00.551) 0:05:39.637 *********
2025-07-05 22:41:35.417796 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:35.417807 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:35.417817 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:35.417827 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:35.417837 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:35.417848 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:35.417858 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:35.417868 | orchestrator |
2025-07-05 22:41:35.417879 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-07-05 22:41:35.417889 | orchestrator | Saturday 05 July 2025 22:41:08 +0000 (0:00:09.231) 0:05:48.868 *********
2025-07-05 22:41:35.417900 | orchestrator | changed: [testbed-manager]
2025-07-05 22:41:35.417910 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:35.417941 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:35.417952 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:35.417963 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:35.417973 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:35.417983 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:35.417994 | orchestrator |
2025-07-05 22:41:35.418004 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-07-05 22:41:35.418071 | orchestrator | Saturday 05 July 2025 22:41:09 +0000 (0:00:01.063) 0:05:49.932 *********
2025-07-05 22:41:35.418084 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:35.418095 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:35.418105 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:35.418116 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:35.418126 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:35.418136 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:35.418147 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:35.418157 | orchestrator |
2025-07-05 22:41:35.418168 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-07-05 22:41:35.418178 | orchestrator | Saturday 05 July 2025 22:41:18 +0000 (0:00:08.673) 0:05:58.606 *********
2025-07-05 22:41:35.418189 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:35.418199 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:35.418209 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:35.418220 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:35.418230 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:35.418240 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:35.418251 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:35.418262 | orchestrator |
2025-07-05 22:41:35.418272 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-07-05 22:41:35.418283 | orchestrator | Saturday 05 July 2025 22:41:29 +0000 (0:00:10.431) 0:06:09.037 *********
2025-07-05 22:41:35.418293 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-07-05 22:41:35.418304 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-07-05 22:41:35.418318 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-07-05 22:41:35.418336 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-07-05 22:41:35.418356 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-07-05 22:41:35.418373 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-07-05 22:41:35.418389 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-07-05 22:41:35.418405 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-07-05 22:41:35.418422 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-07-05 22:41:35.418450 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-07-05 22:41:35.418467 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-07-05 22:41:35.418483 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-07-05 22:41:35.418502 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-07-05 22:41:35.418521 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-07-05 22:41:35.418541 | orchestrator |
2025-07-05 22:41:35.418560 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-07-05 22:41:35.418580 | orchestrator | Saturday 05 July 2025 22:41:30 +0000 (0:00:01.214) 0:06:10.252 *********
2025-07-05 22:41:35.418591 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:41:35.418601 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:41:35.418612 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:41:35.418622 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:41:35.418668 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:41:35.418682 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:41:35.418693 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:41:35.418703 | orchestrator |
2025-07-05 22:41:35.418714 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-07-05 22:41:35.418725 | orchestrator | Saturday 05 July 2025 22:41:30 +0000 (0:00:00.508) 0:06:10.760 *********
2025-07-05 22:41:35.418735 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:35.418746 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:35.418756 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:35.418767 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:35.418777 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:35.418791 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:35.418818 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:35.418837 | orchestrator |
2025-07-05 22:41:35.418854 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-07-05 22:41:35.418873 | orchestrator | Saturday 05 July 2025 22:41:34 +0000 (0:00:03.693) 0:06:14.453 *********
2025-07-05 22:41:35.418890 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:41:35.418907 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:41:35.418926 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:41:35.418944 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:41:35.418962 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:41:35.418973 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:41:35.418983 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:41:35.418994 | orchestrator |
2025-07-05 22:41:35.419005 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-07-05 22:41:35.419016 | orchestrator | Saturday 05 July 2025 22:41:35 +0000 (0:00:00.525) 0:06:14.979 *********
2025-07-05 22:41:35.419026 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-07-05 22:41:35.419036 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-07-05 22:41:35.419047 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:41:35.419057 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-07-05 22:41:35.419067 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-07-05 22:41:35.419078 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:41:35.419088 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-07-05 22:41:35.419098 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-07-05 22:41:35.419109 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:41:35.419119 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-07-05 22:41:35.419130 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-07-05 22:41:35.419152 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:41:54.079322 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-07-05 22:41:54.079438 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-07-05 22:41:54.079490 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:41:54.079503 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-07-05 22:41:54.079513 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-07-05 22:41:54.079522 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:41:54.079532 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-07-05 22:41:54.079541 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-07-05 22:41:54.079550 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:41:54.079561 | orchestrator |
2025-07-05 22:41:54.079572 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-07-05 22:41:54.079582 | orchestrator | Saturday 05 July 2025 22:41:35 +0000 (0:00:00.548) 0:06:15.527 *********
2025-07-05 22:41:54.079592 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:41:54.079601 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:41:54.079610 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:41:54.079677 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:41:54.079687 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:41:54.079697 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:41:54.079706 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:41:54.079716 | orchestrator |
2025-07-05 22:41:54.079739 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-07-05 22:41:54.079750 | orchestrator | Saturday 05 July 2025 22:41:36 +0000 (0:00:00.540) 0:06:16.068 *********
2025-07-05 22:41:54.079770 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:41:54.079780 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:41:54.079790 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:41:54.079799 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:41:54.079809 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:41:54.079818 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:41:54.079828 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:41:54.079837 | orchestrator |
2025-07-05 22:41:54.079847 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-07-05 22:41:54.079858 | orchestrator | Saturday 05 July 2025 22:41:36 +0000 (0:00:00.546) 0:06:16.614 *********
2025-07-05 22:41:54.079869 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:41:54.079881 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:41:54.079893 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:41:54.079904 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:41:54.079915 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:41:54.079927 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:41:54.079938 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:41:54.079949 | orchestrator |
2025-07-05 22:41:54.079960 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-07-05 22:41:54.079972 | orchestrator | Saturday 05 July 2025 22:41:37 +0000 (0:00:00.665) 0:06:17.279 *********
2025-07-05 22:41:54.079983 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:54.079995 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:41:54.080006 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:41:54.080017 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:41:54.080028 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:41:54.080039 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:41:54.080049 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:41:54.080061 | orchestrator |
2025-07-05 22:41:54.080072 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-07-05 22:41:54.080084 | orchestrator | Saturday 05 July 2025 22:41:38 +0000 (0:00:01.643) 0:06:18.922 *********
2025-07-05 22:41:54.080096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:41:54.080111 | orchestrator |
2025-07-05 22:41:54.080121 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-07-05 22:41:54.080138 | orchestrator | Saturday 05 July 2025 22:41:39 +0000 (0:00:00.903) 0:06:19.826 *********
2025-07-05 22:41:54.080148 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:54.080158 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:54.080167 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:54.080177 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:54.080186 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:54.080196 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:54.080205 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:54.080215 | orchestrator |
2025-07-05 22:41:54.080224 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-07-05 22:41:54.080234 | orchestrator | Saturday 05 July 2025 22:41:40 +0000 (0:00:00.858) 0:06:20.685 *********
2025-07-05 22:41:54.080244 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:54.080253 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:54.080262 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:54.080272 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:54.080281 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:54.080291 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:54.080300 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:54.080310 | orchestrator |
2025-07-05 22:41:54.080319 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-07-05 22:41:54.080347 | orchestrator | Saturday 05 July 2025 22:41:41 +0000 (0:00:01.044) 0:06:21.729 *********
2025-07-05 22:41:54.080357 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:54.080366 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:54.080375 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:54.080385 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:54.080394 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:54.080403 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:54.080413 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:54.080422 | orchestrator |
2025-07-05 22:41:54.080432 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-07-05 22:41:54.080441 | orchestrator | Saturday 05 July 2025 22:41:43 +0000 (0:00:01.270) 0:06:23.000 *********
2025-07-05 22:41:54.080451 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:41:54.080478 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:41:54.080488 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:41:54.080498 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:41:54.080507 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:41:54.080517 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:41:54.080526 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:41:54.080536 | orchestrator |
2025-07-05 22:41:54.080545 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-07-05 22:41:54.080555 | orchestrator | Saturday 05 July 2025 22:41:44 +0000 (0:00:01.282) 0:06:24.283 *********
2025-07-05 22:41:54.080565 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:54.080574 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:54.080584 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:54.080593 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:54.080602 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:54.080638 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:54.080648 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:54.080658 | orchestrator |
2025-07-05 22:41:54.080667 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-07-05 22:41:54.080677 | orchestrator | Saturday 05 July 2025 22:41:45 +0000 (0:00:01.235) 0:06:25.519 *********
2025-07-05 22:41:54.080686 | orchestrator | changed: [testbed-manager]
2025-07-05 22:41:54.080696 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:41:54.080705 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:41:54.080714 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:41:54.080724 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:41:54.080734 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:41:54.080743 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:41:54.080762 | orchestrator |
2025-07-05 22:41:54.080772 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-07-05 22:41:54.080781 | orchestrator | Saturday 05 July 2025 22:41:47 +0000 (0:00:01.570) 0:06:27.089 *********
2025-07-05 22:41:54.080791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:41:54.080801 | orchestrator |
2025-07-05 22:41:54.080811 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-07-05 22:41:54.080821 | orchestrator | Saturday 05 July 2025 22:41:47 +0000 (0:00:00.817) 0:06:27.907 *********
2025-07-05 22:41:54.080830 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:54.080840 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:41:54.080849 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:41:54.080859 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:41:54.080868 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:41:54.080878 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:41:54.080887 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:41:54.080896 | orchestrator |
2025-07-05 22:41:54.080906 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-07-05 22:41:54.080916 | orchestrator | Saturday 05 July 2025 22:41:49 +0000 (0:00:01.275) 0:06:29.183 *********
2025-07-05 22:41:54.080925 | orchestrator | ok: [testbed-manager]
2025-07-05 22:41:54.080935 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:41:54.080944 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:41:54.080954 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:41:54.080963 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:41:54.080972 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:41:54.080982 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:41:54.080991 | orchestrator |
2025-07-05
22:41:54.081001 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-07-05 22:41:54.081010 | orchestrator | Saturday 05 July 2025 22:41:50 +0000 (0:00:01.105) 0:06:30.288 ********* 2025-07-05 22:41:54.081020 | orchestrator | ok: [testbed-manager] 2025-07-05 22:41:54.081029 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:41:54.081039 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:41:54.081048 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:41:54.081058 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:41:54.081067 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:41:54.081076 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:41:54.081086 | orchestrator | 2025-07-05 22:41:54.081096 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-07-05 22:41:54.081105 | orchestrator | Saturday 05 July 2025 22:41:51 +0000 (0:00:01.370) 0:06:31.659 ********* 2025-07-05 22:41:54.081115 | orchestrator | ok: [testbed-manager] 2025-07-05 22:41:54.081129 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:41:54.081139 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:41:54.081148 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:41:54.081158 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:41:54.081167 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:41:54.081177 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:41:54.081186 | orchestrator | 2025-07-05 22:41:54.081196 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-07-05 22:41:54.081205 | orchestrator | Saturday 05 July 2025 22:41:52 +0000 (0:00:01.163) 0:06:32.822 ********* 2025-07-05 22:41:54.081215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 
22:41:54.081225 | orchestrator | 2025-07-05 22:41:54.081234 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-05 22:41:54.081244 | orchestrator | Saturday 05 July 2025 22:41:53 +0000 (0:00:00.875) 0:06:33.697 ********* 2025-07-05 22:41:54.081254 | orchestrator | 2025-07-05 22:41:54.081263 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-05 22:41:54.081280 | orchestrator | Saturday 05 July 2025 22:41:53 +0000 (0:00:00.045) 0:06:33.743 ********* 2025-07-05 22:41:54.081289 | orchestrator | 2025-07-05 22:41:54.081299 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-05 22:41:54.081308 | orchestrator | Saturday 05 July 2025 22:41:53 +0000 (0:00:00.039) 0:06:33.782 ********* 2025-07-05 22:41:54.081318 | orchestrator | 2025-07-05 22:41:54.081327 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-05 22:41:54.081337 | orchestrator | Saturday 05 July 2025 22:41:53 +0000 (0:00:00.038) 0:06:33.821 ********* 2025-07-05 22:41:54.081346 | orchestrator | 2025-07-05 22:41:54.081356 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-05 22:41:54.081371 | orchestrator | Saturday 05 July 2025 22:41:53 +0000 (0:00:00.045) 0:06:33.866 ********* 2025-07-05 22:42:19.083437 | orchestrator | 2025-07-05 22:42:19.083639 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-05 22:42:19.083662 | orchestrator | Saturday 05 July 2025 22:41:53 +0000 (0:00:00.038) 0:06:33.905 ********* 2025-07-05 22:42:19.083675 | orchestrator | 2025-07-05 22:42:19.083687 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-05 22:42:19.083699 | orchestrator | Saturday 05 July 2025 22:41:54 +0000 (0:00:00.038) 0:06:33.944 ********* 
2025-07-05 22:42:19.083711 | orchestrator | 2025-07-05 22:42:19.083722 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-07-05 22:42:19.083734 | orchestrator | Saturday 05 July 2025 22:41:54 +0000 (0:00:00.049) 0:06:33.994 ********* 2025-07-05 22:42:19.083745 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:42:19.083758 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:42:19.083769 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:42:19.083780 | orchestrator | 2025-07-05 22:42:19.083791 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-07-05 22:42:19.083803 | orchestrator | Saturday 05 July 2025 22:41:55 +0000 (0:00:01.358) 0:06:35.352 ********* 2025-07-05 22:42:19.083814 | orchestrator | changed: [testbed-manager] 2025-07-05 22:42:19.083826 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:42:19.083837 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:42:19.083848 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:42:19.083859 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:42:19.083870 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:42:19.083882 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:42:19.083893 | orchestrator | 2025-07-05 22:42:19.083904 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-07-05 22:42:19.083915 | orchestrator | Saturday 05 July 2025 22:41:56 +0000 (0:00:01.329) 0:06:36.681 ********* 2025-07-05 22:42:19.083927 | orchestrator | changed: [testbed-manager] 2025-07-05 22:42:19.083939 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:42:19.083951 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:42:19.083965 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:42:19.083979 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:42:19.083992 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:42:19.084006 | 
orchestrator | changed: [testbed-node-5] 2025-07-05 22:42:19.084019 | orchestrator | 2025-07-05 22:42:19.084033 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-07-05 22:42:19.084048 | orchestrator | Saturday 05 July 2025 22:41:57 +0000 (0:00:01.091) 0:06:37.773 ********* 2025-07-05 22:42:19.084062 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:42:19.084075 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:42:19.084089 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:42:19.084102 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:42:19.084116 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:42:19.084129 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:42:19.084143 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:42:19.084156 | orchestrator | 2025-07-05 22:42:19.084170 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-07-05 22:42:19.084209 | orchestrator | Saturday 05 July 2025 22:42:00 +0000 (0:00:02.229) 0:06:40.002 ********* 2025-07-05 22:42:19.084223 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:42:19.084236 | orchestrator | 2025-07-05 22:42:19.084250 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-07-05 22:42:19.084264 | orchestrator | Saturday 05 July 2025 22:42:00 +0000 (0:00:00.100) 0:06:40.103 ********* 2025-07-05 22:42:19.084277 | orchestrator | ok: [testbed-manager] 2025-07-05 22:42:19.084290 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:42:19.084303 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:42:19.084317 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:42:19.084330 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:42:19.084344 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:42:19.084357 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:42:19.084368 | orchestrator | 
2025-07-05 22:42:19.084380 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-07-05 22:42:19.084392 | orchestrator | Saturday 05 July 2025 22:42:01 +0000 (0:00:01.021) 0:06:41.124 ********* 2025-07-05 22:42:19.084403 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:42:19.084429 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:42:19.084441 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:42:19.084452 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:42:19.084463 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:42:19.084475 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:42:19.084486 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:42:19.084498 | orchestrator | 2025-07-05 22:42:19.084509 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-07-05 22:42:19.084521 | orchestrator | Saturday 05 July 2025 22:42:01 +0000 (0:00:00.712) 0:06:41.837 ********* 2025-07-05 22:42:19.084533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:42:19.084548 | orchestrator | 2025-07-05 22:42:19.084559 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-07-05 22:42:19.084571 | orchestrator | Saturday 05 July 2025 22:42:02 +0000 (0:00:00.906) 0:06:42.744 ********* 2025-07-05 22:42:19.084583 | orchestrator | ok: [testbed-manager] 2025-07-05 22:42:19.084630 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:42:19.084652 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:42:19.084670 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:42:19.084688 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:42:19.084698 | orchestrator | ok: [testbed-node-4] 2025-07-05 
22:42:19.084709 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:42:19.084720 | orchestrator | 2025-07-05 22:42:19.084730 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-07-05 22:42:19.084741 | orchestrator | Saturday 05 July 2025 22:42:03 +0000 (0:00:00.820) 0:06:43.564 ********* 2025-07-05 22:42:19.084752 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-07-05 22:42:19.084763 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-07-05 22:42:19.084774 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-07-05 22:42:19.084803 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-07-05 22:42:19.084814 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-07-05 22:42:19.084825 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-07-05 22:42:19.084836 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-07-05 22:42:19.084846 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-07-05 22:42:19.084857 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-07-05 22:42:19.084867 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-07-05 22:42:19.084878 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-07-05 22:42:19.084888 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-07-05 22:42:19.084914 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-07-05 22:42:19.084926 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-07-05 22:42:19.084937 | orchestrator | 2025-07-05 22:42:19.084948 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-07-05 22:42:19.084958 | orchestrator | Saturday 05 July 2025 22:42:06 +0000 (0:00:02.619) 0:06:46.184 ********* 2025-07-05 22:42:19.084969 | 
orchestrator | skipping: [testbed-manager] 2025-07-05 22:42:19.084980 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:42:19.084990 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:42:19.085001 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:42:19.085011 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:42:19.085022 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:42:19.085032 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:42:19.085043 | orchestrator | 2025-07-05 22:42:19.085054 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-07-05 22:42:19.085064 | orchestrator | Saturday 05 July 2025 22:42:06 +0000 (0:00:00.466) 0:06:46.651 ********* 2025-07-05 22:42:19.085077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:42:19.085090 | orchestrator | 2025-07-05 22:42:19.085101 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-07-05 22:42:19.085112 | orchestrator | Saturday 05 July 2025 22:42:07 +0000 (0:00:00.725) 0:06:47.376 ********* 2025-07-05 22:42:19.085122 | orchestrator | ok: [testbed-manager] 2025-07-05 22:42:19.085133 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:42:19.085143 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:42:19.085154 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:42:19.085164 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:42:19.085175 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:42:19.085185 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:42:19.085196 | orchestrator | 2025-07-05 22:42:19.085207 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-07-05 22:42:19.085217 | 
orchestrator | Saturday 05 July 2025 22:42:08 +0000 (0:00:00.927) 0:06:48.304 ********* 2025-07-05 22:42:19.085228 | orchestrator | ok: [testbed-manager] 2025-07-05 22:42:19.085238 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:42:19.085249 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:42:19.085259 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:42:19.085270 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:42:19.085280 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:42:19.085290 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:42:19.085301 | orchestrator | 2025-07-05 22:42:19.085312 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-07-05 22:42:19.085322 | orchestrator | Saturday 05 July 2025 22:42:09 +0000 (0:00:00.743) 0:06:49.047 ********* 2025-07-05 22:42:19.085333 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:42:19.085343 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:42:19.085354 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:42:19.085365 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:42:19.085375 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:42:19.085386 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:42:19.085396 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:42:19.085407 | orchestrator | 2025-07-05 22:42:19.085424 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-07-05 22:42:19.085435 | orchestrator | Saturday 05 July 2025 22:42:09 +0000 (0:00:00.465) 0:06:49.513 ********* 2025-07-05 22:42:19.085445 | orchestrator | ok: [testbed-manager] 2025-07-05 22:42:19.085456 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:42:19.085467 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:42:19.085477 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:42:19.085495 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:42:19.085506 | orchestrator | ok: 
[testbed-node-4] 2025-07-05 22:42:19.085516 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:42:19.085527 | orchestrator | 2025-07-05 22:42:19.085538 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-07-05 22:42:19.085548 | orchestrator | Saturday 05 July 2025 22:42:11 +0000 (0:00:01.542) 0:06:51.055 ********* 2025-07-05 22:42:19.085559 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:42:19.085569 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:42:19.085580 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:42:19.085617 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:42:19.085628 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:42:19.085639 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:42:19.085649 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:42:19.085660 | orchestrator | 2025-07-05 22:42:19.085670 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-07-05 22:42:19.085681 | orchestrator | Saturday 05 July 2025 22:42:11 +0000 (0:00:00.521) 0:06:51.576 ********* 2025-07-05 22:42:19.085691 | orchestrator | ok: [testbed-manager] 2025-07-05 22:42:19.085702 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:42:19.085713 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:42:19.085723 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:42:19.085733 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:42:19.085743 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:42:19.085754 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:42:19.085764 | orchestrator | 2025-07-05 22:42:19.085775 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-07-05 22:42:19.085792 | orchestrator | Saturday 05 July 2025 22:42:19 +0000 (0:00:07.425) 0:06:59.002 ********* 2025-07-05 22:42:51.848358 | orchestrator | ok: [testbed-manager] 
2025-07-05 22:42:51.848498 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:42:51.848515 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:42:51.848527 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:42:51.848547 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:42:51.848606 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:42:51.848688 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:42:51.848707 | orchestrator | 2025-07-05 22:42:51.848720 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-07-05 22:42:51.848732 | orchestrator | Saturday 05 July 2025 22:42:20 +0000 (0:00:01.375) 0:07:00.377 ********* 2025-07-05 22:42:51.848743 | orchestrator | ok: [testbed-manager] 2025-07-05 22:42:51.848754 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:42:51.848764 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:42:51.848775 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:42:51.848785 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:42:51.848796 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:42:51.848807 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:42:51.848817 | orchestrator | 2025-07-05 22:42:51.848828 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-07-05 22:42:51.848840 | orchestrator | Saturday 05 July 2025 22:42:22 +0000 (0:00:01.651) 0:07:02.029 ********* 2025-07-05 22:42:51.848850 | orchestrator | ok: [testbed-manager] 2025-07-05 22:42:51.848861 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:42:51.848871 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:42:51.848882 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:42:51.848895 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:42:51.848907 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:42:51.848919 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:42:51.848932 | 
orchestrator | 2025-07-05 22:42:51.848944 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-05 22:42:51.848956 | orchestrator | Saturday 05 July 2025 22:42:23 +0000 (0:00:01.797) 0:07:03.826 ********* 2025-07-05 22:42:51.848969 | orchestrator | ok: [testbed-manager] 2025-07-05 22:42:51.848982 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:42:51.849017 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:42:51.849030 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:42:51.849043 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:42:51.849055 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:42:51.849067 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:42:51.849079 | orchestrator | 2025-07-05 22:42:51.849092 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-05 22:42:51.849104 | orchestrator | Saturday 05 July 2025 22:42:24 +0000 (0:00:00.860) 0:07:04.686 ********* 2025-07-05 22:42:51.849116 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:42:51.849129 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:42:51.849141 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:42:51.849153 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:42:51.849165 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:42:51.849177 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:42:51.849191 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:42:51.849203 | orchestrator | 2025-07-05 22:42:51.849214 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-07-05 22:42:51.849225 | orchestrator | Saturday 05 July 2025 22:42:25 +0000 (0:00:00.792) 0:07:05.479 ********* 2025-07-05 22:42:51.849235 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:42:51.849246 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:42:51.849256 | orchestrator | skipping: [testbed-node-1] 
2025-07-05 22:42:51.849267 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:42:51.849277 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:42:51.849288 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:42:51.849298 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:42:51.849309 | orchestrator | 2025-07-05 22:42:51.849319 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-07-05 22:42:51.849330 | orchestrator | Saturday 05 July 2025 22:42:26 +0000 (0:00:00.519) 0:07:05.998 ********* 2025-07-05 22:42:51.849341 | orchestrator | ok: [testbed-manager] 2025-07-05 22:42:51.849351 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:42:51.849362 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:42:51.849372 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:42:51.849383 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:42:51.849393 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:42:51.849419 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:42:51.849430 | orchestrator | 2025-07-05 22:42:51.849440 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-07-05 22:42:51.849451 | orchestrator | Saturday 05 July 2025 22:42:26 +0000 (0:00:00.675) 0:07:06.674 ********* 2025-07-05 22:42:51.849462 | orchestrator | ok: [testbed-manager] 2025-07-05 22:42:51.849472 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:42:51.849483 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:42:51.849494 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:42:51.849504 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:42:51.849515 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:42:51.849526 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:42:51.849544 | orchestrator | 2025-07-05 22:42:51.849586 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-07-05 22:42:51.849604 | orchestrator | Saturday 05 July 
2025 22:42:27 +0000 (0:00:00.552) 0:07:07.226 ********* 2025-07-05 22:42:51.849623 | orchestrator | ok: [testbed-manager] 2025-07-05 22:42:51.849640 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:42:51.849658 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:42:51.849670 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:42:51.849680 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:42:51.849690 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:42:51.849701 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:42:51.849711 | orchestrator | 2025-07-05 22:42:51.849722 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-07-05 22:42:51.849732 | orchestrator | Saturday 05 July 2025 22:42:27 +0000 (0:00:00.517) 0:07:07.744 ********* 2025-07-05 22:42:51.849743 | orchestrator | ok: [testbed-manager] 2025-07-05 22:42:51.849763 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:42:51.849774 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:42:51.849784 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:42:51.849794 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:42:51.849805 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:42:51.849815 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:42:51.849825 | orchestrator | 2025-07-05 22:42:51.849836 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-07-05 22:42:51.849847 | orchestrator | Saturday 05 July 2025 22:42:33 +0000 (0:00:05.434) 0:07:13.178 ********* 2025-07-05 22:42:51.849857 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:42:51.849887 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:42:51.849899 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:42:51.849910 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:42:51.849920 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:42:51.849930 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:42:51.849941 | 
orchestrator | skipping: [testbed-node-5]
2025-07-05 22:42:51.849952 | orchestrator |
2025-07-05 22:42:51.849962 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-07-05 22:42:51.849973 | orchestrator | Saturday 05 July 2025  22:42:33 +0000 (0:00:00.721)       0:07:13.900 *********
2025-07-05 22:42:51.849986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:42:51.849999 | orchestrator |
2025-07-05 22:42:51.850010 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-07-05 22:42:51.850071 | orchestrator | Saturday 05 July 2025  22:42:34 +0000 (0:00:00.804)       0:07:14.704 *********
2025-07-05 22:42:51.850083 | orchestrator | ok: [testbed-manager]
2025-07-05 22:42:51.850094 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:42:51.850104 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:42:51.850115 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:42:51.850126 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:42:51.850136 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:42:51.850147 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:42:51.850157 | orchestrator |
2025-07-05 22:42:51.850168 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-07-05 22:42:51.850179 | orchestrator | Saturday 05 July 2025  22:42:36 +0000 (0:00:01.830)       0:07:16.535 *********
2025-07-05 22:42:51.850189 | orchestrator | ok: [testbed-manager]
2025-07-05 22:42:51.850200 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:42:51.850210 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:42:51.850221 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:42:51.850231 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:42:51.850242 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:42:51.850252 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:42:51.850262 | orchestrator |
2025-07-05 22:42:51.850273 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-07-05 22:42:51.850284 | orchestrator | Saturday 05 July 2025  22:42:37 +0000 (0:00:01.132)       0:07:17.667 *********
2025-07-05 22:42:51.850294 | orchestrator | ok: [testbed-manager]
2025-07-05 22:42:51.850305 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:42:51.850315 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:42:51.850326 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:42:51.850336 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:42:51.850347 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:42:51.850357 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:42:51.850368 | orchestrator |
2025-07-05 22:42:51.850378 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-07-05 22:42:51.850389 | orchestrator | Saturday 05 July 2025  22:42:38 +0000 (0:00:01.053)       0:07:18.721 *********
2025-07-05 22:42:51.850400 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-05 22:42:51.850412 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-05 22:42:51.850431 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-05 22:42:51.850442 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-05 22:42:51.850453 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-05 22:42:51.850464 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-05 22:42:51.850474 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-05 22:42:51.850485 | orchestrator |
2025-07-05 22:42:51.850496 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-07-05 22:42:51.850506 | orchestrator | Saturday 05 July 2025  22:42:40 +0000 (0:00:01.796)       0:07:20.518 *********
2025-07-05 22:42:51.850518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:42:51.850531 | orchestrator |
2025-07-05 22:42:51.850582 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-07-05 22:42:51.850603 | orchestrator | Saturday 05 July 2025  22:42:41 +0000 (0:00:00.804)       0:07:21.322 *********
2025-07-05 22:42:51.850621 | orchestrator | changed: [testbed-manager]
2025-07-05 22:42:51.850639 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:42:51.850659 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:42:51.850722 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:42:51.850735 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:42:51.850746 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:42:51.850757 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:42:51.850767 | orchestrator |
2025-07-05 22:42:51.850778 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-07-05 22:42:51.850789 | orchestrator | Saturday 05 July 2025  22:42:50 +0000 (0:00:08.717)       0:07:30.039 *********
2025-07-05 22:42:51.850800 | orchestrator | ok: [testbed-manager]
2025-07-05 22:42:51.850811 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:42:51.850831 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:43:06.085677 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:43:06.085794 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:43:06.085809 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:43:06.085821 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:43:06.085832 | orchestrator |
2025-07-05 22:43:06.085845 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-07-05 22:43:06.085859 | orchestrator | Saturday 05 July 2025  22:42:51 +0000 (0:00:01.730)       0:07:31.770 *********
2025-07-05 22:43:06.085870 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:43:06.085881 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:43:06.085891 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:43:06.085902 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:43:06.085913 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:43:06.085923 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:43:06.085934 | orchestrator |
2025-07-05 22:43:06.085945 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-07-05 22:43:06.085956 | orchestrator | Saturday 05 July 2025  22:42:53 +0000 (0:00:01.261)       0:07:33.031 *********
2025-07-05 22:43:06.085968 | orchestrator | changed: [testbed-manager]
2025-07-05 22:43:06.085980 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:43:06.085990 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:43:06.086001 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:43:06.086092 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:43:06.086107 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:43:06.086118 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:43:06.086129 | orchestrator |
2025-07-05 22:43:06.086140 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-07-05 22:43:06.086150 | orchestrator |
2025-07-05 22:43:06.086161 | orchestrator | TASK [Include hardening role] **************************************************
2025-07-05 22:43:06.086172 | orchestrator | Saturday 05 July 2025  22:42:54 +0000 (0:00:01.436)       0:07:34.468 *********
2025-07-05 22:43:06.086185 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:43:06.086198 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:43:06.086210 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:43:06.086224 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:43:06.086237 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:43:06.086250 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:43:06.086262 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:43:06.086275 | orchestrator |
2025-07-05 22:43:06.086288 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-07-05 22:43:06.086301 | orchestrator |
2025-07-05 22:43:06.086314 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-07-05 22:43:06.086327 | orchestrator | Saturday 05 July 2025  22:42:55 +0000 (0:00:00.529)       0:07:34.998 *********
2025-07-05 22:43:06.086340 | orchestrator | changed: [testbed-manager]
2025-07-05 22:43:06.086353 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:43:06.086366 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:43:06.086379 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:43:06.086392 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:43:06.086404 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:43:06.086417 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:43:06.086430 | orchestrator |
2025-07-05 22:43:06.086443 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-07-05 22:43:06.086456 | orchestrator | Saturday 05 July 2025  22:42:56 +0000 (0:00:01.300)       0:07:36.298 *********
2025-07-05 22:43:06.086468 | orchestrator | ok: [testbed-manager]
2025-07-05 22:43:06.086481 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:43:06.086494 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:43:06.086507 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:43:06.086519 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:43:06.086529 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:43:06.086562 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:43:06.086574 | orchestrator |
2025-07-05 22:43:06.086585 | orchestrator | TASK [Include auditd role] *****************************************************
2025-07-05 22:43:06.086596 | orchestrator | Saturday 05 July 2025  22:42:57 +0000 (0:00:01.612)       0:07:37.911 *********
2025-07-05 22:43:06.086606 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:43:06.086617 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:43:06.086628 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:43:06.086639 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:43:06.086649 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:43:06.086675 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:43:06.086686 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:43:06.086697 | orchestrator |
2025-07-05 22:43:06.086708 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-07-05 22:43:06.086719 | orchestrator | Saturday 05 July 2025  22:42:58 +0000 (0:00:00.793)       0:07:38.704 *********
2025-07-05 22:43:06.086730 | orchestrator | changed: [testbed-manager]
2025-07-05 22:43:06.086741 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:43:06.086751 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:43:06.086762 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:43:06.086772 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:43:06.086783 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:43:06.086794 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:43:06.086804 | orchestrator |
2025-07-05 22:43:06.086815 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-07-05 22:43:06.086835 | orchestrator |
2025-07-05 22:43:06.086846 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-07-05 22:43:06.086857 | orchestrator | Saturday 05 July 2025  22:43:00 +0000 (0:00:01.281)       0:07:39.985 *********
2025-07-05 22:43:06.086868 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:43:06.086881 | orchestrator |
2025-07-05 22:43:06.086891 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-07-05 22:43:06.086902 | orchestrator | Saturday 05 July 2025  22:43:01 +0000 (0:00:00.964)       0:07:40.950 *********
2025-07-05 22:43:06.086913 | orchestrator | ok: [testbed-manager]
2025-07-05 22:43:06.086924 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:43:06.086934 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:43:06.086945 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:43:06.086956 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:43:06.086966 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:43:06.086977 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:43:06.086988 | orchestrator |
2025-07-05 22:43:06.086999 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-07-05 22:43:06.087027 | orchestrator | Saturday 05 July 2025  22:43:01 +0000 (0:00:00.837)       0:07:41.787 *********
2025-07-05 22:43:06.087039 | orchestrator | changed: [testbed-manager]
2025-07-05 22:43:06.087074 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:43:06.087086 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:43:06.087096 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:43:06.087107 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:43:06.087118 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:43:06.087129 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:43:06.087139 | orchestrator |
2025-07-05 22:43:06.087150 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-07-05 22:43:06.087161 | orchestrator | Saturday 05 July 2025  22:43:03 +0000 (0:00:01.251)       0:07:43.039 *********
2025-07-05 22:43:06.087172 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:43:06.087183 | orchestrator |
2025-07-05 22:43:06.087194 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-07-05 22:43:06.087205 | orchestrator | Saturday 05 July 2025  22:43:04 +0000 (0:00:01.002)       0:07:44.041 *********
2025-07-05 22:43:06.087215 | orchestrator | ok: [testbed-manager]
2025-07-05 22:43:06.087226 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:43:06.087237 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:43:06.087247 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:43:06.087258 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:43:06.087269 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:43:06.087279 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:43:06.087290 | orchestrator |
2025-07-05 22:43:06.087301 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-07-05 22:43:06.087311 | orchestrator | Saturday 05 July 2025  22:43:04 +0000 (0:00:00.815)       0:07:44.857 *********
2025-07-05 22:43:06.087322 | orchestrator | changed: [testbed-manager]
2025-07-05 22:43:06.087333 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:43:06.087344 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:43:06.087355 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:43:06.087365 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:43:06.087376 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:43:06.087387 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:43:06.087397 | orchestrator |
2025-07-05 22:43:06.087408 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 22:43:06.087421 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-07-05 22:43:06.087432 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-07-05 22:43:06.087450 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-05 22:43:06.087461 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-05 22:43:06.087472 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-05 22:43:06.087483 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-05 22:43:06.087494 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-05 22:43:06.087505 | orchestrator |
2025-07-05 22:43:06.087515 | orchestrator |
2025-07-05 22:43:06.087527 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 22:43:06.087584 | orchestrator | Saturday 05 July 2025  22:43:06 +0000 (0:00:01.128)       0:07:45.985 *********
2025-07-05 22:43:06.087596 | orchestrator | ===============================================================================
2025-07-05 22:43:06.087607 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.64s
2025-07-05 22:43:06.087618 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.87s
2025-07-05 22:43:06.087628 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.56s
2025-07-05 22:43:06.087639 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.72s
2025-07-05 22:43:06.087650 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.91s
2025-07-05 22:43:06.087662 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.61s
2025-07-05 22:43:06.087672 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.43s
2025-07-05 22:43:06.087683 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.23s
2025-07-05 22:43:06.087694 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.72s
2025-07-05 22:43:06.087705 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.67s
2025-07-05 22:43:06.087716 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.17s
2025-07-05 22:43:06.087726 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.97s
2025-07-05 22:43:06.087737 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.66s
2025-07-05 22:43:06.087748 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.43s
2025-07-05 22:43:06.087759 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.43s
2025-07-05 22:43:06.087776 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.31s
2025-07-05 22:43:06.536245 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.52s
2025-07-05 22:43:06.536348 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.96s
2025-07-05 22:43:06.536364 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.62s
2025-07-05 22:43:06.536376 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.43s
2025-07-05 22:43:06.831470 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-07-05 22:43:06.831621 | orchestrator | + osism apply network
2025-07-05 22:43:19.256154 | orchestrator | 2025-07-05 22:43:19 | INFO  | Task d8bc01ef-681b-4220-b1df-e19068eba4c6 (network) was prepared for execution.
2025-07-05 22:43:19.256273 | orchestrator | 2025-07-05 22:43:19 | INFO  | It takes a moment until task d8bc01ef-681b-4220-b1df-e19068eba4c6 (network) has been started and output is visible here.
2025-07-05 22:43:48.352636 | orchestrator |
2025-07-05 22:43:48.352756 | orchestrator | PLAY [Apply role network] ******************************************************
2025-07-05 22:43:48.352774 | orchestrator |
2025-07-05 22:43:48.352786 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-07-05 22:43:48.352797 | orchestrator | Saturday 05 July 2025  22:43:23 +0000 (0:00:00.270)       0:00:00.270 *********
2025-07-05 22:43:48.352809 | orchestrator | ok: [testbed-manager]
2025-07-05 22:43:48.352820 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:43:48.352831 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:43:48.352842 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:43:48.352853 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:43:48.352863 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:43:48.352874 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:43:48.352884 | orchestrator |
2025-07-05 22:43:48.352895 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-07-05 22:43:48.352906 | orchestrator | Saturday 05 July 2025  22:43:24 +0000 (0:00:00.778)       0:00:01.049 *********
2025-07-05 22:43:48.352919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:43:48.352932 | orchestrator |
2025-07-05 22:43:48.352943 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-07-05 22:43:48.352954 | orchestrator | Saturday 05 July 2025  22:43:25 +0000 (0:00:01.245)       0:00:02.294 *********
2025-07-05 22:43:48.352966 | orchestrator | ok: [testbed-manager]
2025-07-05 22:43:48.352977 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:43:48.352987 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:43:48.352998 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:43:48.353008 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:43:48.353019 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:43:48.353030 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:43:48.353040 | orchestrator |
2025-07-05 22:43:48.353051 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-07-05 22:43:48.353062 | orchestrator | Saturday 05 July 2025  22:43:27 +0000 (0:00:01.937)       0:00:04.232 *********
2025-07-05 22:43:48.353072 | orchestrator | ok: [testbed-manager]
2025-07-05 22:43:48.353083 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:43:48.353094 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:43:48.353107 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:43:48.353118 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:43:48.353130 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:43:48.353142 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:43:48.353154 | orchestrator |
2025-07-05 22:43:48.353166 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-07-05 22:43:48.353179 | orchestrator | Saturday 05 July 2025  22:43:29 +0000 (0:00:01.702)       0:00:05.935 *********
2025-07-05 22:43:48.353191 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-07-05 22:43:48.353204 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-07-05 22:43:48.353233 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-07-05 22:43:48.353244 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-07-05 22:43:48.353255 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-07-05 22:43:48.353266 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-07-05 22:43:48.353276 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-07-05 22:43:48.353287 | orchestrator |
2025-07-05 22:43:48.353298 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-07-05 22:43:48.353308 | orchestrator | Saturday 05 July 2025  22:43:30 +0000 (0:00:00.990)       0:00:06.925 *********
2025-07-05 22:43:48.353319 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-05 22:43:48.353330 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-05 22:43:48.353341 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-05 22:43:48.353374 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-05 22:43:48.353386 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-05 22:43:48.353396 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-05 22:43:48.353407 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-05 22:43:48.353417 | orchestrator |
2025-07-05 22:43:48.353428 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-07-05 22:43:48.353438 | orchestrator | Saturday 05 July 2025  22:43:33 +0000 (0:00:03.450)       0:00:10.376 *********
2025-07-05 22:43:48.353449 | orchestrator | changed: [testbed-manager]
2025-07-05 22:43:48.353459 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:43:48.353469 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:43:48.353480 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:43:48.353490 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:43:48.353523 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:43:48.353534 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:43:48.353545 | orchestrator |
2025-07-05 22:43:48.353556 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-07-05 22:43:48.353566 | orchestrator | Saturday 05 July 2025  22:43:35 +0000 (0:00:01.478)       0:00:11.855 *********
2025-07-05 22:43:48.353577 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-05 22:43:48.353588 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-05 22:43:48.353599 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-05 22:43:48.353610 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-05 22:43:48.353620 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-05 22:43:48.353631 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-05 22:43:48.353642 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-05 22:43:48.353652 | orchestrator |
2025-07-05 22:43:48.353663 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-07-05 22:43:48.353674 | orchestrator | Saturday 05 July 2025  22:43:36 +0000 (0:00:01.872)       0:00:13.727 *********
2025-07-05 22:43:48.353684 | orchestrator | ok: [testbed-manager]
2025-07-05 22:43:48.353695 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:43:48.353706 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:43:48.353716 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:43:48.353727 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:43:48.353737 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:43:48.353748 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:43:48.353759 | orchestrator |
2025-07-05 22:43:48.353770 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-07-05 22:43:48.353798 | orchestrator | Saturday 05 July 2025  22:43:38 +0000 (0:00:01.118)       0:00:14.846 *********
2025-07-05 22:43:48.353810 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:43:48.353821 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:43:48.353831 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:43:48.353842 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:43:48.353853 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:43:48.353863 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:43:48.353874 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:43:48.353885 | orchestrator |
2025-07-05 22:43:48.353896 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-07-05 22:43:48.353906 | orchestrator | Saturday 05 July 2025  22:43:38 +0000 (0:00:00.670)       0:00:15.516 *********
2025-07-05 22:43:48.353917 | orchestrator | ok: [testbed-manager]
2025-07-05 22:43:48.353928 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:43:48.353938 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:43:48.353949 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:43:48.353960 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:43:48.353970 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:43:48.353981 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:43:48.353992 | orchestrator |
2025-07-05 22:43:48.354003 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-07-05 22:43:48.354074 | orchestrator | Saturday 05 July 2025  22:43:40 +0000 (0:00:02.202)       0:00:17.719 *********
2025-07-05 22:43:48.354099 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:43:48.354122 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:43:48.354140 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:43:48.354158 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:43:48.354176 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:43:48.354191 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:43:48.354209 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-07-05 22:43:48.354228 | orchestrator |
2025-07-05 22:43:48.354245 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-07-05 22:43:48.354264 | orchestrator | Saturday 05 July 2025  22:43:41 +0000 (0:00:00.865)       0:00:18.584 *********
2025-07-05 22:43:48.354279 | orchestrator | ok: [testbed-manager]
2025-07-05 22:43:48.354289 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:43:48.354300 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:43:48.354311 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:43:48.354321 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:43:48.354332 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:43:48.354342 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:43:48.354353 | orchestrator |
2025-07-05 22:43:48.354364 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-07-05 22:43:48.354374 | orchestrator | Saturday 05 July 2025  22:43:43 +0000 (0:00:01.687)       0:00:20.272 *********
2025-07-05 22:43:48.354393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:43:48.354407 | orchestrator |
2025-07-05 22:43:48.354418 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-07-05 22:43:48.354428 | orchestrator | Saturday 05 July 2025  22:43:44 +0000 (0:00:01.228)       0:00:21.501 *********
2025-07-05 22:43:48.354439 | orchestrator | ok: [testbed-manager]
2025-07-05 22:43:48.354450 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:43:48.354460 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:43:48.354471 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:43:48.354481 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:43:48.354512 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:43:48.354523 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:43:48.354534 | orchestrator |
2025-07-05 22:43:48.354545 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-07-05 22:43:48.354556 | orchestrator | Saturday 05 July 2025  22:43:46 +0000 (0:00:01.619)       0:00:23.120 *********
2025-07-05 22:43:48.354567 | orchestrator | ok: [testbed-manager]
2025-07-05 22:43:48.354577 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:43:48.354588 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:43:48.354598 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:43:48.354609 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:43:48.354619 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:43:48.354630 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:43:48.354640 | orchestrator |
2025-07-05 22:43:48.354651 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-07-05 22:43:48.354662 | orchestrator | Saturday 05 July 2025  22:43:47 +0000 (0:00:00.832)       0:00:23.953 *********
2025-07-05 22:43:48.354672 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-05 22:43:48.354683 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-07-05 22:43:48.354694 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-05 22:43:48.354705 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-07-05 22:43:48.354716 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-05 22:43:48.354726 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-07-05 22:43:48.354737 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-05 22:43:48.354757 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-07-05 22:43:48.354767 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-05 22:43:48.354778 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-07-05 22:43:48.354788 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-05 22:43:48.354799 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-07-05 22:43:48.354809 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-05 22:43:48.354820 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-07-05 22:43:48.354831 | orchestrator |
2025-07-05 22:43:48.354851 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-07-05 22:44:05.292721 | orchestrator | Saturday 05 July 2025  22:43:48 +0000 (0:00:01.173)       0:00:25.127 *********
2025-07-05 22:44:05.292861 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:44:05.292878 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:44:05.292890 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:44:05.292901 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:44:05.292912 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:44:05.292923 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:44:05.292934 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:44:05.292945 | orchestrator |
2025-07-05 22:44:05.292957 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-07-05 22:44:05.292969 | orchestrator | Saturday 05 July 2025  22:43:48 +0000 (0:00:00.646)       0:00:25.773 *********
2025-07-05 22:44:05.292983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-0, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-5, testbed-node-4
2025-07-05 22:44:05.292998 | orchestrator |
2025-07-05 22:44:05.293009 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-07-05 22:44:05.293020 | orchestrator | Saturday 05 July 2025  22:43:53 +0000 (0:00:04.529)       0:00:30.302 *********
2025-07-05 22:44:05.293034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-07-05 22:44:05.293047 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-07-05 22:44:05.293061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-07-05 22:44:05.293074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-07-05 22:44:05.293086 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-07-05 22:44:05.293098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-07-05 22:44:05.293126 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-07-05 22:44:05.293169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-07-05 22:44:05.293182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-07-05 22:44:05.293193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-07-05 22:44:05.293204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15',
'192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-05 22:44:05.293234 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-05 22:44:05.293248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-05 22:44:05.293261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-05 22:44:05.293274 | orchestrator | 2025-07-05 22:44:05.293287 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-07-05 22:44:05.293301 | orchestrator | Saturday 05 July 2025 22:43:59 +0000 (0:00:05.870) 0:00:36.172 ********* 2025-07-05 22:44:05.293314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-05 22:44:05.293327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-05 22:44:05.293341 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': 
['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-05 22:44:05.293368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-05 22:44:05.293389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-05 22:44:05.293403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-05 22:44:05.293425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-07-05 22:44:05.293438 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-05 22:44:05.293451 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-05 22:44:05.293464 
| orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-05 22:44:05.293503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-05 22:44:05.293517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-05 22:44:05.293544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-05 22:44:10.750365 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-05 22:44:10.750523 | orchestrator | 2025-07-05 22:44:10.750540 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-07-05 22:44:10.750552 | orchestrator | Saturday 05 July 2025 22:44:05 +0000 (0:00:05.895) 0:00:42.068 ********* 2025-07-05 22:44:10.750565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:44:10.750575 | orchestrator | 2025-07-05 22:44:10.750586 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-07-05 22:44:10.750596 | orchestrator | Saturday 05 July 2025 22:44:06 +0000 (0:00:01.103) 0:00:43.172 ********* 2025-07-05 22:44:10.750605 | orchestrator | ok: [testbed-manager] 2025-07-05 22:44:10.750616 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:44:10.750626 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:44:10.750635 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:44:10.750645 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:44:10.750654 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:44:10.750663 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:44:10.750673 | orchestrator | 2025-07-05 22:44:10.750683 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-05 22:44:10.750692 | orchestrator | Saturday 05 July 2025 22:44:07 +0000 (0:00:01.024) 0:00:44.197 ********* 2025-07-05 22:44:10.750724 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-05 22:44:10.750734 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-05 22:44:10.750744 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-05 22:44:10.750753 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-05 22:44:10.750762 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-05 22:44:10.750787 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-05 22:44:10.750797 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-05 22:44:10.750808 | 
orchestrator | skipping: [testbed-manager] 2025-07-05 22:44:10.750819 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-05 22:44:10.750829 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-05 22:44:10.750838 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-05 22:44:10.750848 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-05 22:44:10.750857 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-05 22:44:10.750867 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:44:10.750876 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-05 22:44:10.750888 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-05 22:44:10.750904 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-05 22:44:10.750920 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-05 22:44:10.750937 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:44:10.750954 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-05 22:44:10.750971 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-05 22:44:10.750988 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-05 22:44:10.751000 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-05 22:44:10.751012 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:44:10.751023 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-05 22:44:10.751034 | orchestrator | skipping: 
[testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-05 22:44:10.751044 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-05 22:44:10.751055 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-05 22:44:10.751067 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:44:10.751078 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:44:10.751088 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-05 22:44:10.751100 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-05 22:44:10.751111 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-05 22:44:10.751121 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-05 22:44:10.751132 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:44:10.751143 | orchestrator | 2025-07-05 22:44:10.751155 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-07-05 22:44:10.751190 | orchestrator | Saturday 05 July 2025 22:44:09 +0000 (0:00:01.794) 0:00:45.991 ********* 2025-07-05 22:44:10.751209 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:44:10.751238 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:44:10.751250 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:44:10.751259 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:44:10.751269 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:44:10.751278 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:44:10.751288 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:44:10.751297 | orchestrator | 2025-07-05 22:44:10.751307 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-07-05 22:44:10.751316 | orchestrator | 
Saturday 05 July 2025 22:44:09 +0000 (0:00:00.595) 0:00:46.587 ********* 2025-07-05 22:44:10.751326 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:44:10.751335 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:44:10.751345 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:44:10.751354 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:44:10.751364 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:44:10.751373 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:44:10.751382 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:44:10.751392 | orchestrator | 2025-07-05 22:44:10.751401 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:44:10.751412 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-05 22:44:10.751423 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-05 22:44:10.751433 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-05 22:44:10.751443 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-05 22:44:10.751452 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-05 22:44:10.751467 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-05 22:44:10.751503 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-05 22:44:10.751513 | orchestrator | 2025-07-05 22:44:10.751522 | orchestrator | 2025-07-05 22:44:10.751532 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:44:10.751542 | orchestrator | Saturday 05 July 2025 22:44:10 +0000 (0:00:00.629) 0:00:47.216 
********* 2025-07-05 22:44:10.751551 | orchestrator | =============================================================================== 2025-07-05 22:44:10.751560 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.90s 2025-07-05 22:44:10.751570 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.87s 2025-07-05 22:44:10.751579 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.53s 2025-07-05 22:44:10.751588 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.45s 2025-07-05 22:44:10.751598 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.20s 2025-07-05 22:44:10.751607 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.94s 2025-07-05 22:44:10.751617 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.87s 2025-07-05 22:44:10.751626 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.79s 2025-07-05 22:44:10.751635 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.70s 2025-07-05 22:44:10.751645 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.69s 2025-07-05 22:44:10.751654 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.62s 2025-07-05 22:44:10.751670 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.48s 2025-07-05 22:44:10.751679 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.25s 2025-07-05 22:44:10.751689 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.23s 2025-07-05 22:44:10.751698 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.17s 
2025-07-05 22:44:10.751707 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.12s 2025-07-05 22:44:10.751717 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.10s 2025-07-05 22:44:10.751726 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.02s 2025-07-05 22:44:10.751736 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s 2025-07-05 22:44:10.751745 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.87s 2025-07-05 22:44:11.002326 | orchestrator | + osism apply wireguard 2025-07-05 22:44:22.930289 | orchestrator | 2025-07-05 22:44:22 | INFO  | Task 72308721-5ffb-4631-806e-bd2e3f8a1691 (wireguard) was prepared for execution. 2025-07-05 22:44:22.930423 | orchestrator | 2025-07-05 22:44:22 | INFO  | It takes a moment until task 72308721-5ffb-4631-806e-bd2e3f8a1691 (wireguard) has been started and output is visible here. 
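For orientation: the "Create systemd networkd netdev files" task above renders one netdev file per item, parameterized by the logged `vni`, `local_ip`, and `mtu` values. A hedged sketch of what such a file plausibly looks like for the `vxlan0` item on testbed-node-1, using field names from systemd.netdev(5) — the role's actual template and file name may differ:

```ini
# /etc/systemd/network/30-vxlan0.netdev (hypothetical path; sketch only)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.11
```

The per-item `dests` list cannot be expressed as a single remote in the netdev file; unicast VXLAN meshes like this are typically wired up with all-zero FDB entries per destination (e.g. `bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst 192.168.16.10`), which would explain the "Copy dispatcher scripts" task in the recap above. This is an inference from the logged parameters, not a confirmed detail of the osism.commons.network role.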
2025-07-05 22:44:41.349846 | orchestrator | 2025-07-05 22:44:41.349969 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-07-05 22:44:41.349986 | orchestrator | 2025-07-05 22:44:41.349998 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-07-05 22:44:41.350010 | orchestrator | Saturday 05 July 2025 22:44:26 +0000 (0:00:00.232) 0:00:00.232 ********* 2025-07-05 22:44:41.350082 | orchestrator | ok: [testbed-manager] 2025-07-05 22:44:41.350095 | orchestrator | 2025-07-05 22:44:41.350106 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-07-05 22:44:41.350117 | orchestrator | Saturday 05 July 2025 22:44:28 +0000 (0:00:01.426) 0:00:01.659 ********* 2025-07-05 22:44:41.350128 | orchestrator | changed: [testbed-manager] 2025-07-05 22:44:41.350140 | orchestrator | 2025-07-05 22:44:41.350151 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-07-05 22:44:41.350162 | orchestrator | Saturday 05 July 2025 22:44:34 +0000 (0:00:06.057) 0:00:07.717 ********* 2025-07-05 22:44:41.350173 | orchestrator | changed: [testbed-manager] 2025-07-05 22:44:41.350184 | orchestrator | 2025-07-05 22:44:41.350195 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-07-05 22:44:41.350206 | orchestrator | Saturday 05 July 2025 22:44:34 +0000 (0:00:00.536) 0:00:08.253 ********* 2025-07-05 22:44:41.350217 | orchestrator | changed: [testbed-manager] 2025-07-05 22:44:41.350227 | orchestrator | 2025-07-05 22:44:41.350238 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-07-05 22:44:41.350249 | orchestrator | Saturday 05 July 2025 22:44:35 +0000 (0:00:00.421) 0:00:08.675 ********* 2025-07-05 22:44:41.350260 | orchestrator | ok: [testbed-manager] 2025-07-05 22:44:41.350270 | orchestrator | 2025-07-05 
22:44:41.350281 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-07-05 22:44:41.350292 | orchestrator | Saturday 05 July 2025 22:44:35 +0000 (0:00:00.520) 0:00:09.196 ********* 2025-07-05 22:44:41.350303 | orchestrator | ok: [testbed-manager] 2025-07-05 22:44:41.350314 | orchestrator | 2025-07-05 22:44:41.350325 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-07-05 22:44:41.350336 | orchestrator | Saturday 05 July 2025 22:44:36 +0000 (0:00:00.511) 0:00:09.707 ********* 2025-07-05 22:44:41.350347 | orchestrator | ok: [testbed-manager] 2025-07-05 22:44:41.350357 | orchestrator | 2025-07-05 22:44:41.350369 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-07-05 22:44:41.350380 | orchestrator | Saturday 05 July 2025 22:44:36 +0000 (0:00:00.398) 0:00:10.105 ********* 2025-07-05 22:44:41.350420 | orchestrator | changed: [testbed-manager] 2025-07-05 22:44:41.350434 | orchestrator | 2025-07-05 22:44:41.350515 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-07-05 22:44:41.350529 | orchestrator | Saturday 05 July 2025 22:44:37 +0000 (0:00:01.143) 0:00:11.249 ********* 2025-07-05 22:44:41.350541 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-05 22:44:41.350554 | orchestrator | changed: [testbed-manager] 2025-07-05 22:44:41.350566 | orchestrator | 2025-07-05 22:44:41.350582 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-07-05 22:44:41.350602 | orchestrator | Saturday 05 July 2025 22:44:38 +0000 (0:00:00.881) 0:00:12.130 ********* 2025-07-05 22:44:41.350620 | orchestrator | changed: [testbed-manager] 2025-07-05 22:44:41.350639 | orchestrator | 2025-07-05 22:44:41.350658 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-07-05 
22:44:41.350679 | orchestrator | Saturday 05 July 2025 22:44:40 +0000 (0:00:01.579) 0:00:13.709 ********* 2025-07-05 22:44:41.350698 | orchestrator | changed: [testbed-manager] 2025-07-05 22:44:41.350712 | orchestrator | 2025-07-05 22:44:41.350725 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:44:41.350738 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:44:41.350751 | orchestrator | 2025-07-05 22:44:41.350762 | orchestrator | 2025-07-05 22:44:41.350773 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:44:41.350783 | orchestrator | Saturday 05 July 2025 22:44:41 +0000 (0:00:00.806) 0:00:14.516 ********* 2025-07-05 22:44:41.350794 | orchestrator | =============================================================================== 2025-07-05 22:44:41.350805 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.06s 2025-07-05 22:44:41.350815 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.58s 2025-07-05 22:44:41.350826 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.43s 2025-07-05 22:44:41.350837 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.14s 2025-07-05 22:44:41.350847 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.88s 2025-07-05 22:44:41.350858 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.81s 2025-07-05 22:44:41.350869 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s 2025-07-05 22:44:41.350879 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-07-05 22:44:41.350890 | orchestrator | osism.services.wireguard : Get 
public key - server ---------------------- 0.51s 2025-07-05 22:44:41.350901 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s 2025-07-05 22:44:41.350912 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s 2025-07-05 22:44:41.531969 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-07-05 22:44:41.568835 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-07-05 22:44:41.568933 | orchestrator | Dload Upload Total Spent Left Speed 2025-07-05 22:44:41.644395 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 184 0 --:--:-- --:--:-- --:--:-- 186 2025-07-05 22:44:41.657828 | orchestrator | + osism apply --environment custom workarounds 2025-07-05 22:44:43.344731 | orchestrator | 2025-07-05 22:44:43 | INFO  | Trying to run play workarounds in environment custom 2025-07-05 22:44:53.479275 | orchestrator | 2025-07-05 22:44:53 | INFO  | Task e205a0fa-38a3-4a19-836e-0a4375f60f1e (workarounds) was prepared for execution. 2025-07-05 22:44:53.479416 | orchestrator | 2025-07-05 22:44:53 | INFO  | It takes a moment until task e205a0fa-38a3-4a19-836e-0a4375f60f1e (workarounds) has been started and output is visible here. 
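The WireGuard play above generates server/preshared keys and templates out `wg0.conf` plus client configuration files. A hedged sketch of the kind of `wg0.conf` that `wg-quick@wg0.service` would consume, using field names from wg-quick(8) — all addresses, port, and key placeholders here are assumptions, not values from this deployment:

```ini
# /etc/wireguard/wg0.conf (hypothetical sketch; keys and addresses are placeholders)
[Interface]
Address = <server_vpn_ip>/24
ListenPort = 51820
PrivateKey = <server private key generated above>

[Peer]
PublicKey = <client public key>
PresharedKey = <preshared key generated above>
AllowedIPs = <client_vpn_ip>/32
```

Restarting `wg-quick@wg0` (the "Restart wg0 service" handler) brings the interface up with this configuration; the copied client files would mirror the `[Peer]` relationship from the client's side.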
2025-07-05 22:45:19.343372 | orchestrator | 2025-07-05 22:45:19.343492 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 22:45:19.343501 | orchestrator | 2025-07-05 22:45:19.343506 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-07-05 22:45:19.343510 | orchestrator | Saturday 05 July 2025 22:44:57 +0000 (0:00:00.141) 0:00:00.141 ********* 2025-07-05 22:45:19.343515 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-07-05 22:45:19.343520 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-07-05 22:45:19.343524 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-07-05 22:45:19.343527 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-07-05 22:45:19.343532 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-07-05 22:45:19.343536 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-07-05 22:45:19.343539 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-07-05 22:45:19.343543 | orchestrator | 2025-07-05 22:45:19.343547 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-07-05 22:45:19.343550 | orchestrator | 2025-07-05 22:45:19.343554 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-07-05 22:45:19.343558 | orchestrator | Saturday 05 July 2025 22:44:58 +0000 (0:00:00.754) 0:00:00.896 ********* 2025-07-05 22:45:19.343562 | orchestrator | ok: [testbed-manager] 2025-07-05 22:45:19.343567 | orchestrator | 2025-07-05 22:45:19.343570 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-07-05 22:45:19.343574 | orchestrator | 2025-07-05 22:45:19.343578 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-07-05 22:45:19.343593 | orchestrator | Saturday 05 July 2025 22:45:00 +0000 (0:00:02.362) 0:00:03.258 ********* 2025-07-05 22:45:19.343597 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:45:19.343601 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:45:19.343604 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:45:19.343608 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:45:19.343612 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:45:19.343615 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:45:19.343661 | orchestrator | 2025-07-05 22:45:19.343666 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-07-05 22:45:19.343669 | orchestrator | 2025-07-05 22:45:19.343673 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-07-05 22:45:19.343677 | orchestrator | Saturday 05 July 2025 22:45:02 +0000 (0:00:01.938) 0:00:05.196 ********* 2025-07-05 22:45:19.343682 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-05 22:45:19.343687 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-05 22:45:19.343690 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-05 22:45:19.343694 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-05 22:45:19.343698 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-05 22:45:19.343702 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-05 22:45:19.343705 | orchestrator | 2025-07-05 22:45:19.343710 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-07-05 22:45:19.343714 | orchestrator | Saturday 05 July 2025 22:45:03 +0000 (0:00:01.505) 0:00:06.701 ********* 2025-07-05 22:45:19.343717 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:45:19.343721 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:45:19.343725 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:45:19.343729 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:45:19.343732 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:45:19.343751 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:45:19.343755 | orchestrator | 2025-07-05 22:45:19.343759 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-07-05 22:45:19.343763 | orchestrator | Saturday 05 July 2025 22:45:07 +0000 (0:00:03.941) 0:00:10.643 ********* 2025-07-05 22:45:19.343766 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:45:19.343770 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:45:19.343774 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:45:19.343777 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:45:19.343781 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:45:19.343785 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:45:19.343788 | orchestrator | 2025-07-05 22:45:19.343792 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-07-05 22:45:19.343796 | orchestrator | 2025-07-05 22:45:19.343800 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-07-05 22:45:19.343803 | orchestrator | Saturday 05 July 2025 22:45:08 +0000 (0:00:00.683) 0:00:11.326 ********* 2025-07-05 22:45:19.343807 | orchestrator | changed: [testbed-manager] 2025-07-05 22:45:19.343811 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:45:19.343814 | orchestrator | changed: [testbed-node-0] 2025-07-05 
22:45:19.343818 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:45:19.343822 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:45:19.343825 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:45:19.343829 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:45:19.343833 | orchestrator | 2025-07-05 22:45:19.343836 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-07-05 22:45:19.343840 | orchestrator | Saturday 05 July 2025 22:45:10 +0000 (0:00:01.718) 0:00:13.045 ********* 2025-07-05 22:45:19.343844 | orchestrator | changed: [testbed-manager] 2025-07-05 22:45:19.343847 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:45:19.343851 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:45:19.343855 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:45:19.343858 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:45:19.343862 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:45:19.343878 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:45:19.343882 | orchestrator | 2025-07-05 22:45:19.343886 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-07-05 22:45:19.343890 | orchestrator | Saturday 05 July 2025 22:45:11 +0000 (0:00:01.655) 0:00:14.701 ********* 2025-07-05 22:45:19.343893 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:45:19.343897 | orchestrator | ok: [testbed-manager] 2025-07-05 22:45:19.343901 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:45:19.343905 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:45:19.343908 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:45:19.343912 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:45:19.343916 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:45:19.343919 | orchestrator | 2025-07-05 22:45:19.343923 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-07-05 22:45:19.343927 | orchestrator 
| Saturday 05 July 2025 22:45:13 +0000 (0:00:01.583) 0:00:16.284 ********* 2025-07-05 22:45:19.343931 | orchestrator | changed: [testbed-manager] 2025-07-05 22:45:19.343935 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:45:19.343940 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:45:19.343945 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:45:19.343949 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:45:19.343953 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:45:19.343957 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:45:19.343962 | orchestrator | 2025-07-05 22:45:19.343966 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-07-05 22:45:19.343970 | orchestrator | Saturday 05 July 2025 22:45:15 +0000 (0:00:01.876) 0:00:18.161 ********* 2025-07-05 22:45:19.343975 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:45:19.343979 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:45:19.343988 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:45:19.343992 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:45:19.343996 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:45:19.344000 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:45:19.344005 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:45:19.344009 | orchestrator | 2025-07-05 22:45:19.344017 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-07-05 22:45:19.344021 | orchestrator | 2025-07-05 22:45:19.344025 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-07-05 22:45:19.344030 | orchestrator | Saturday 05 July 2025 22:45:15 +0000 (0:00:00.634) 0:00:18.795 ********* 2025-07-05 22:45:19.344034 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:45:19.344039 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:45:19.344043 | orchestrator | ok: [testbed-node-2] 
2025-07-05 22:45:19.344047 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:45:19.344052 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:45:19.344056 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:45:19.344060 | orchestrator | ok: [testbed-manager] 2025-07-05 22:45:19.344064 | orchestrator | 2025-07-05 22:45:19.344069 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:45:19.344075 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:45:19.344081 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:45:19.344085 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:45:19.344089 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:45:19.344094 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:45:19.344098 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:45:19.344102 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:45:19.344107 | orchestrator | 2025-07-05 22:45:19.344113 | orchestrator | 2025-07-05 22:45:19.344119 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:45:19.344126 | orchestrator | Saturday 05 July 2025 22:45:19 +0000 (0:00:03.327) 0:00:22.123 ********* 2025-07-05 22:45:19.344132 | orchestrator | =============================================================================== 2025-07-05 22:45:19.344138 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.94s 2025-07-05 22:45:19.344145 | orchestrator | Install python3-docker 
-------------------------------------------------- 3.33s 2025-07-05 22:45:19.344151 | orchestrator | Apply netplan configuration --------------------------------------------- 2.36s 2025-07-05 22:45:19.344157 | orchestrator | Apply netplan configuration --------------------------------------------- 1.94s 2025-07-05 22:45:19.344163 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.88s 2025-07-05 22:45:19.344169 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.72s 2025-07-05 22:45:19.344175 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.66s 2025-07-05 22:45:19.344181 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.58s 2025-07-05 22:45:19.344187 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.51s 2025-07-05 22:45:19.344194 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.75s 2025-07-05 22:45:19.344205 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.68s 2025-07-05 22:45:19.344215 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s 2025-07-05 22:45:20.087314 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-07-05 22:45:32.052900 | orchestrator | 2025-07-05 22:45:32 | INFO  | Task daf22712-8805-4287-8ab6-a82a0d1988d3 (reboot) was prepared for execution. 2025-07-05 22:45:32.053044 | orchestrator | 2025-07-05 22:45:32 | INFO  | It takes a moment until task daf22712-8805-4287-8ab6-a82a0d1988d3 (reboot) has been started and output is visible here. 
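The two CA plays above show the usual split: Debian-family hosts get the certificate dropped in and `update-ca-certificates` run, while the `update-ca-trust` task exists only for RedHat-family hosts and is therefore skipped on every Ubuntu node here. A minimal sketch of that per-family dispatch (the function name, paths, and `extract` argument are illustrative assumptions, not the playbook's actual implementation):

```shell
# Hypothetical helper mirroring the pattern in the plays above: pick the
# trust-store refresh command by OS family. Debian-family reads certs from
# /usr/local/share/ca-certificates, RedHat-family from
# /etc/pki/ca-trust/source/anchors -- which is why exactly one of the two
# tasks is skipped on any given host.
ca_update_command() {
    case "$1" in
        Debian) echo "install -m 0644 testbed.crt /usr/local/share/ca-certificates/ && update-ca-certificates" ;;
        RedHat) echo "install -m 0644 testbed.crt /etc/pki/ca-trust/source/anchors/ && update-ca-trust extract" ;;
        *)      return 1 ;;  # unknown family: refuse rather than guess
    esac
}
```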
2025-07-05 22:45:41.715003 | orchestrator | 2025-07-05 22:45:41.715125 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-05 22:45:41.715164 | orchestrator | 2025-07-05 22:45:41.715177 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-05 22:45:41.715188 | orchestrator | Saturday 05 July 2025 22:45:35 +0000 (0:00:00.202) 0:00:00.202 ********* 2025-07-05 22:45:41.715199 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:45:41.715211 | orchestrator | 2025-07-05 22:45:41.715222 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-05 22:45:41.715233 | orchestrator | Saturday 05 July 2025 22:45:36 +0000 (0:00:00.101) 0:00:00.303 ********* 2025-07-05 22:45:41.715244 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:45:41.715255 | orchestrator | 2025-07-05 22:45:41.715266 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-05 22:45:41.715276 | orchestrator | Saturday 05 July 2025 22:45:37 +0000 (0:00:00.954) 0:00:01.258 ********* 2025-07-05 22:45:41.715287 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:45:41.715298 | orchestrator | 2025-07-05 22:45:41.715308 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-05 22:45:41.715319 | orchestrator | 2025-07-05 22:45:41.715330 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-05 22:45:41.715341 | orchestrator | Saturday 05 July 2025 22:45:37 +0000 (0:00:00.108) 0:00:01.366 ********* 2025-07-05 22:45:41.715352 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:45:41.715362 | orchestrator | 2025-07-05 22:45:41.715373 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-05 22:45:41.715442 | orchestrator | Saturday 05 July 2025 
22:45:37 +0000 (0:00:00.080) 0:00:01.446 ********* 2025-07-05 22:45:41.715455 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:45:41.715466 | orchestrator | 2025-07-05 22:45:41.715476 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-05 22:45:41.715488 | orchestrator | Saturday 05 July 2025 22:45:37 +0000 (0:00:00.643) 0:00:02.089 ********* 2025-07-05 22:45:41.715499 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:45:41.715510 | orchestrator | 2025-07-05 22:45:41.715520 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-05 22:45:41.715531 | orchestrator | 2025-07-05 22:45:41.715544 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-05 22:45:41.715555 | orchestrator | Saturday 05 July 2025 22:45:37 +0000 (0:00:00.101) 0:00:02.191 ********* 2025-07-05 22:45:41.715569 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:45:41.715580 | orchestrator | 2025-07-05 22:45:41.715593 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-05 22:45:41.715605 | orchestrator | Saturday 05 July 2025 22:45:38 +0000 (0:00:00.150) 0:00:02.341 ********* 2025-07-05 22:45:41.715617 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:45:41.715629 | orchestrator | 2025-07-05 22:45:41.715641 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-05 22:45:41.715654 | orchestrator | Saturday 05 July 2025 22:45:38 +0000 (0:00:00.661) 0:00:03.003 ********* 2025-07-05 22:45:41.715666 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:45:41.715678 | orchestrator | 2025-07-05 22:45:41.715690 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-05 22:45:41.715728 | orchestrator | 2025-07-05 22:45:41.715742 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2025-07-05 22:45:41.715754 | orchestrator | Saturday 05 July 2025 22:45:38 +0000 (0:00:00.112) 0:00:03.115 ********* 2025-07-05 22:45:41.715767 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:45:41.715779 | orchestrator | 2025-07-05 22:45:41.715809 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-05 22:45:41.715822 | orchestrator | Saturday 05 July 2025 22:45:38 +0000 (0:00:00.087) 0:00:03.202 ********* 2025-07-05 22:45:41.715834 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:45:41.715847 | orchestrator | 2025-07-05 22:45:41.715859 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-05 22:45:41.715871 | orchestrator | Saturday 05 July 2025 22:45:39 +0000 (0:00:00.686) 0:00:03.889 ********* 2025-07-05 22:45:41.715884 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:45:41.715896 | orchestrator | 2025-07-05 22:45:41.715907 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-05 22:45:41.715919 | orchestrator | 2025-07-05 22:45:41.715929 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-05 22:45:41.715940 | orchestrator | Saturday 05 July 2025 22:45:39 +0000 (0:00:00.120) 0:00:04.009 ********* 2025-07-05 22:45:41.715951 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:45:41.715961 | orchestrator | 2025-07-05 22:45:41.715972 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-05 22:45:41.715982 | orchestrator | Saturday 05 July 2025 22:45:39 +0000 (0:00:00.101) 0:00:04.111 ********* 2025-07-05 22:45:41.715993 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:45:41.716003 | orchestrator | 2025-07-05 22:45:41.716014 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-07-05 22:45:41.716025 | orchestrator | Saturday 05 July 2025 22:45:40 +0000 (0:00:00.653) 0:00:04.764 ********* 2025-07-05 22:45:41.716036 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:45:41.716046 | orchestrator | 2025-07-05 22:45:41.716057 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-05 22:45:41.716068 | orchestrator | 2025-07-05 22:45:41.716078 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-05 22:45:41.716088 | orchestrator | Saturday 05 July 2025 22:45:40 +0000 (0:00:00.115) 0:00:04.879 ********* 2025-07-05 22:45:41.716099 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:45:41.716110 | orchestrator | 2025-07-05 22:45:41.716121 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-05 22:45:41.716131 | orchestrator | Saturday 05 July 2025 22:45:40 +0000 (0:00:00.102) 0:00:04.982 ********* 2025-07-05 22:45:41.716142 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:45:41.716152 | orchestrator | 2025-07-05 22:45:41.716163 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-05 22:45:41.716174 | orchestrator | Saturday 05 July 2025 22:45:41 +0000 (0:00:00.643) 0:00:05.625 ********* 2025-07-05 22:45:41.716203 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:45:41.716214 | orchestrator | 2025-07-05 22:45:41.716225 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:45:41.716237 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:45:41.716248 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:45:41.716259 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2025-07-05 22:45:41.716270 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:45:41.716286 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:45:41.716305 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:45:41.716316 | orchestrator | 2025-07-05 22:45:41.716327 | orchestrator | 2025-07-05 22:45:41.716338 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:45:41.716349 | orchestrator | Saturday 05 July 2025 22:45:41 +0000 (0:00:00.036) 0:00:05.662 ********* 2025-07-05 22:45:41.716359 | orchestrator | =============================================================================== 2025-07-05 22:45:41.716370 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.24s 2025-07-05 22:45:41.716409 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.62s 2025-07-05 22:45:41.716421 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s 2025-07-05 22:45:41.983915 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-07-05 22:45:54.009980 | orchestrator | 2025-07-05 22:45:54 | INFO  | Task 6cce930d-96f7-4763-9689-9542e7f9af14 (wait-for-connection) was prepared for execution. 2025-07-05 22:45:54.010200 | orchestrator | 2025-07-05 22:45:54 | INFO  | It takes a moment until task 6cce930d-96f7-4763-9689-9542e7f9af14 (wait-for-connection) has been started and output is visible here. 
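The reboot run above is invoked with `-e ireallymeanit=yes`, which is why the "Exit playbook, if user did not mean to reboot systems" task is skipped on every node: the play carries a confirmation guard and aborts unless that variable is set. A rough shell rendering of the guard (the function and messages are illustrative assumptions; the real check lives in the Ansible playbook):

```shell
# Hypothetical sketch of the ireallymeanit guard seen in the play above:
# without an explicit "yes", the run stops before touching any host.
confirm_reboot() {
    if [ "${1:-no}" != "yes" ]; then
        echo "aborting: pass -e ireallymeanit=yes to confirm the reboot" >&2
        return 1
    fi
    echo "rebooting"
}
```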
2025-07-05 22:46:09.956986 | orchestrator | 2025-07-05 22:46:09.957139 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-07-05 22:46:09.957186 | orchestrator | 2025-07-05 22:46:09.957206 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-07-05 22:46:09.957226 | orchestrator | Saturday 05 July 2025 22:45:57 +0000 (0:00:00.250) 0:00:00.250 ********* 2025-07-05 22:46:09.957246 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:46:09.957271 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:46:09.957290 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:46:09.957305 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:46:09.957316 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:46:09.957327 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:46:09.957352 | orchestrator | 2025-07-05 22:46:09.957396 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:46:09.957409 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:46:09.957422 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:46:09.957434 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:46:09.957445 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:46:09.957456 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:46:09.957467 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:46:09.957478 | orchestrator | 2025-07-05 22:46:09.957489 | orchestrator | 2025-07-05 22:46:09.957500 | orchestrator | TASKS RECAP 
******************************************************************** 2025-07-05 22:46:09.957511 | orchestrator | Saturday 05 July 2025 22:46:09 +0000 (0:00:11.650) 0:00:11.901 ********* 2025-07-05 22:46:09.957523 | orchestrator | =============================================================================== 2025-07-05 22:46:09.957537 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.65s 2025-07-05 22:46:10.227771 | orchestrator | + osism apply hddtemp 2025-07-05 22:46:22.257106 | orchestrator | 2025-07-05 22:46:22 | INFO  | Task 546665a2-7213-47db-b0a9-02575096ed86 (hddtemp) was prepared for execution. 2025-07-05 22:46:22.258110 | orchestrator | 2025-07-05 22:46:22 | INFO  | It takes a moment until task 546665a2-7213-47db-b0a9-02575096ed86 (hddtemp) has been started and output is visible here. 2025-07-05 22:46:48.972952 | orchestrator | 2025-07-05 22:46:48.973066 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-07-05 22:46:48.973083 | orchestrator | 2025-07-05 22:46:48.973094 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-07-05 22:46:48.973105 | orchestrator | Saturday 05 July 2025 22:46:26 +0000 (0:00:00.260) 0:00:00.260 ********* 2025-07-05 22:46:48.973115 | orchestrator | ok: [testbed-manager] 2025-07-05 22:46:48.973125 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:46:48.973135 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:46:48.973144 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:46:48.973154 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:46:48.973164 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:46:48.973174 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:46:48.973184 | orchestrator | 2025-07-05 22:46:48.973194 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-07-05 22:46:48.973204 | orchestrator | Saturday 05 July 2025 
22:46:26 +0000 (0:00:00.724) 0:00:00.985 ********* 2025-07-05 22:46:48.973216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:46:48.973228 | orchestrator | 2025-07-05 22:46:48.973252 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-07-05 22:46:48.973263 | orchestrator | Saturday 05 July 2025 22:46:28 +0000 (0:00:01.187) 0:00:02.172 ********* 2025-07-05 22:46:48.973272 | orchestrator | ok: [testbed-manager] 2025-07-05 22:46:48.973282 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:46:48.973291 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:46:48.973301 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:46:48.973310 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:46:48.973365 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:46:48.973376 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:46:48.973386 | orchestrator | 2025-07-05 22:46:48.973396 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-07-05 22:46:48.973405 | orchestrator | Saturday 05 July 2025 22:46:30 +0000 (0:00:01.960) 0:00:04.133 ********* 2025-07-05 22:46:48.973415 | orchestrator | changed: [testbed-manager] 2025-07-05 22:46:48.973426 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:46:48.973442 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:46:48.973458 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:46:48.973473 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:46:48.973489 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:46:48.973505 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:46:48.973522 | orchestrator | 2025-07-05 22:46:48.973538 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-07-05 22:46:48.973554 | orchestrator | Saturday 05 July 2025 22:46:31 +0000 (0:00:01.151) 0:00:05.285 ********* 2025-07-05 22:46:48.973571 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:46:48.973589 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:46:48.973607 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:46:48.973623 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:46:48.973639 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:46:48.973656 | orchestrator | ok: [testbed-manager] 2025-07-05 22:46:48.973673 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:46:48.973689 | orchestrator | 2025-07-05 22:46:48.973705 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-07-05 22:46:48.973721 | orchestrator | Saturday 05 July 2025 22:46:32 +0000 (0:00:01.140) 0:00:06.425 ********* 2025-07-05 22:46:48.973738 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:46:48.973754 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:46:48.973801 | orchestrator | changed: [testbed-manager] 2025-07-05 22:46:48.973812 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:46:48.973822 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:46:48.973831 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:46:48.973841 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:46:48.973850 | orchestrator | 2025-07-05 22:46:48.973860 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-07-05 22:46:48.973869 | orchestrator | Saturday 05 July 2025 22:46:33 +0000 (0:00:00.828) 0:00:07.254 ********* 2025-07-05 22:46:48.973879 | orchestrator | changed: [testbed-manager] 2025-07-05 22:46:48.973888 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:46:48.973897 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:46:48.973906 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:46:48.973916 | orchestrator | changed: 
[testbed-node-2] 2025-07-05 22:46:48.973926 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:46:48.973935 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:46:48.973945 | orchestrator | 2025-07-05 22:46:48.973954 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-07-05 22:46:48.973964 | orchestrator | Saturday 05 July 2025 22:46:45 +0000 (0:00:12.355) 0:00:19.609 ********* 2025-07-05 22:46:48.973974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:46:48.973984 | orchestrator | 2025-07-05 22:46:48.973993 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-07-05 22:46:48.974003 | orchestrator | Saturday 05 July 2025 22:46:46 +0000 (0:00:01.190) 0:00:20.800 ********* 2025-07-05 22:46:48.974012 | orchestrator | changed: [testbed-manager] 2025-07-05 22:46:48.974076 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:46:48.974086 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:46:48.974096 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:46:48.974105 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:46:48.974114 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:46:48.974124 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:46:48.974133 | orchestrator | 2025-07-05 22:46:48.974142 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:46:48.974152 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:46:48.974195 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:46:48.974214 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:46:48.974232 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:46:48.974248 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:46:48.974267 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:46:48.974284 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:46:48.974301 | orchestrator | 2025-07-05 22:46:48.974338 | orchestrator | 2025-07-05 22:46:48.974366 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:46:48.974383 | orchestrator | Saturday 05 July 2025 22:46:48 +0000 (0:00:01.831) 0:00:22.631 ********* 2025-07-05 22:46:48.974400 | orchestrator | =============================================================================== 2025-07-05 22:46:48.974430 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.36s 2025-07-05 22:46:48.974447 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.96s 2025-07-05 22:46:48.974462 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.83s 2025-07-05 22:46:48.974480 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.19s 2025-07-05 22:46:48.974496 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.19s 2025-07-05 22:46:48.974512 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.15s 2025-07-05 22:46:48.974529 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.14s 2025-07-05 22:46:48.974544 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.83s 2025-07-05 22:46:48.974560 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.72s 2025-07-05 22:46:49.243638 | orchestrator | ++ semver latest 7.1.1 2025-07-05 22:46:49.284544 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-05 22:46:49.284645 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-05 22:46:49.284663 | orchestrator | + sudo systemctl restart manager.service 2025-07-05 22:47:02.682813 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-05 22:47:02.682947 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-07-05 22:47:02.682962 | orchestrator | + local max_attempts=60 2025-07-05 22:47:02.682974 | orchestrator | + local name=ceph-ansible 2025-07-05 22:47:02.682984 | orchestrator | + local attempt_num=1 2025-07-05 22:47:02.682994 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:47:02.710793 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-05 22:47:02.710827 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-05 22:47:02.710837 | orchestrator | + sleep 5 2025-07-05 22:47:07.717105 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:47:07.747507 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-05 22:47:07.747627 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-05 22:47:07.747643 | orchestrator | + sleep 5 2025-07-05 22:47:12.750927 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:47:12.791444 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-05 22:47:12.791562 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-05 22:47:12.791575 | orchestrator | + sleep 5 2025-07-05 22:47:17.796153 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:47:17.832993 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-05 22:47:17.833098 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-05 22:47:17.833114 | orchestrator | + sleep 5 2025-07-05 22:47:22.837371 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:47:22.873768 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-05 22:47:22.873866 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-05 22:47:22.873881 | orchestrator | + sleep 5 2025-07-05 22:47:27.878109 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:47:27.913340 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-05 22:47:27.913423 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-05 22:47:27.913436 | orchestrator | + sleep 5 2025-07-05 22:47:32.918708 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:47:32.962628 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-05 22:47:32.962719 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-05 22:47:32.962733 | orchestrator | + sleep 5 2025-07-05 22:47:37.966901 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:47:38.000406 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-05 22:47:38.000532 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-05 22:47:38.000549 | orchestrator | + sleep 5 2025-07-05 22:47:43.002310 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:47:43.039123 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-05 22:47:43.039218 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-05 22:47:43.039233 | orchestrator | + sleep 5 2025-07-05 22:47:48.042280 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:47:48.078939 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2025-07-05 22:47:48.079009 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-05 22:47:48.079016 | orchestrator | + sleep 5 2025-07-05 22:47:53.083032 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:47:53.119734 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-05 22:47:53.119840 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-05 22:47:53.119856 | orchestrator | + sleep 5 2025-07-05 22:47:58.124755 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:47:58.159146 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-05 22:47:58.159245 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-05 22:47:58.159314 | orchestrator | + sleep 5 2025-07-05 22:48:03.164449 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:48:03.207986 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-05 22:48:03.208080 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-05 22:48:03.208095 | orchestrator | + sleep 5 2025-07-05 22:48:08.212923 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-05 22:48:08.251982 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-05 22:48:08.252074 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-07-05 22:48:08.252088 | orchestrator | + local max_attempts=60 2025-07-05 22:48:08.252100 | orchestrator | + local name=kolla-ansible 2025-07-05 22:48:08.252111 | orchestrator | + local attempt_num=1 2025-07-05 22:48:08.252122 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-07-05 22:48:08.289948 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-05 22:48:08.290102 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-07-05 22:48:08.290118 | orchestrator | + local max_attempts=60 2025-07-05 
22:48:08.290130 | orchestrator | + local name=osism-ansible 2025-07-05 22:48:08.290142 | orchestrator | + local attempt_num=1 2025-07-05 22:48:08.290650 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-07-05 22:48:08.336206 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-05 22:48:08.336351 | orchestrator | + [[ true == \t\r\u\e ]] 2025-07-05 22:48:08.336367 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-07-05 22:48:08.492652 | orchestrator | ARA in ceph-ansible already disabled. 2025-07-05 22:48:08.654215 | orchestrator | ARA in kolla-ansible already disabled. 2025-07-05 22:48:08.827796 | orchestrator | ARA in osism-ansible already disabled. 2025-07-05 22:48:08.978801 | orchestrator | ARA in osism-kubernetes already disabled. 2025-07-05 22:48:08.981484 | orchestrator | + osism apply gather-facts 2025-07-05 22:48:20.952396 | orchestrator | 2025-07-05 22:48:20 | INFO  | Task 6151de10-fccd-468a-a9b7-703d81c1c908 (gather-facts) was prepared for execution. 2025-07-05 22:48:20.952492 | orchestrator | 2025-07-05 22:48:20 | INFO  | It takes a moment until task 6151de10-fccd-468a-a9b7-703d81c1c908 (gather-facts) has been started and output is visible here. 
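The `wait_for_container_healthy` calls traced above (polling `docker inspect` every 5 seconds until the container reports `healthy`) can be reconstructed roughly as the sketch below. The logic, variable names, and interval are taken from the xtrace output; the failure message is an assumption, since the trace only ever reaches the success path.

```shell
# Sketch of wait_for_container_healthy as reconstructed from the xtrace above.
# The error message on exhaustion is assumed; the log never shows that branch.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    local status

    # Poll the container's health status every 5 seconds until it reports
    # "healthy" or the attempt budget runs out.
    while true; do
        status="$(docker inspect -f '{{.State.Health.Status}}' "$name")"
        [[ "$status" == "healthy" ]] && return 0
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the log, `ceph-ansible` cycles through `unhealthy` and `starting` for about a minute before the `healthy` check finally passes, while `kolla-ansible` and `osism-ansible` are already healthy on the first poll.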
2025-07-05 22:48:33.787737 | orchestrator | 2025-07-05 22:48:33.787831 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-05 22:48:33.787842 | orchestrator | 2025-07-05 22:48:33.787851 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-05 22:48:33.787860 | orchestrator | Saturday 05 July 2025 22:48:24 +0000 (0:00:00.164) 0:00:00.164 ********* 2025-07-05 22:48:33.787868 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:48:33.787877 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:48:33.787885 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:48:33.787893 | orchestrator | ok: [testbed-manager] 2025-07-05 22:48:33.787901 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:48:33.787908 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:48:33.787916 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:48:33.787924 | orchestrator | 2025-07-05 22:48:33.787932 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-05 22:48:33.787939 | orchestrator | 2025-07-05 22:48:33.787947 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-05 22:48:33.787955 | orchestrator | Saturday 05 July 2025 22:48:32 +0000 (0:00:08.174) 0:00:08.338 ********* 2025-07-05 22:48:33.787963 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:48:33.787994 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:48:33.788002 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:48:33.788010 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:48:33.788018 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:48:33.788025 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:48:33.788033 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:48:33.788041 | orchestrator | 2025-07-05 22:48:33.788049 | orchestrator | PLAY RECAP 
********************************************************************* 2025-07-05 22:48:33.788057 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:48:33.788065 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:48:33.788073 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:48:33.788081 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:48:33.788089 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:48:33.788097 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:48:33.788105 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 22:48:33.788113 | orchestrator | 2025-07-05 22:48:33.788120 | orchestrator | 2025-07-05 22:48:33.788128 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:48:33.788136 | orchestrator | Saturday 05 July 2025 22:48:33 +0000 (0:00:00.523) 0:00:08.862 ********* 2025-07-05 22:48:33.788144 | orchestrator | =============================================================================== 2025-07-05 22:48:33.788152 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.17s 2025-07-05 22:48:33.788160 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-07-05 22:48:33.959168 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-07-05 22:48:33.968095 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-07-05 22:48:33.981383 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-07-05 22:48:33.995712 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-07-05 22:48:34.006612 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-07-05 22:48:34.016685 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-07-05 22:48:34.028356 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-07-05 22:48:34.044126 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-07-05 22:48:34.056484 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-07-05 22:48:34.066390 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-07-05 22:48:34.073478 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-07-05 22:48:34.081347 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-07-05 22:48:34.088782 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-07-05 22:48:34.095909 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-07-05 22:48:34.102698 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-07-05 22:48:34.110655 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-07-05 22:48:34.117154 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-07-05 22:48:34.124288 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-07-05 22:48:34.130567 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-07-05 22:48:34.137571 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-07-05 22:48:34.146200 | orchestrator | + [[ false == \t\r\u\e ]] 2025-07-05 22:48:34.540961 | orchestrator | ok: Runtime: 0:22:38.460007 2025-07-05 22:48:34.643349 | 2025-07-05 22:48:34.643529 | TASK [Deploy services] 2025-07-05 22:48:35.175063 | orchestrator | skipping: Conditional result was False 2025-07-05 22:48:35.191095 | 2025-07-05 22:48:35.191256 | TASK [Deploy in a nutshell] 2025-07-05 22:48:35.921738 | orchestrator | + set -e 2025-07-05 22:48:35.922063 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-05 22:48:35.922096 | orchestrator | ++ export INTERACTIVE=false 2025-07-05 22:48:35.922119 | orchestrator | ++ INTERACTIVE=false 2025-07-05 22:48:35.922134 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-05 22:48:35.922147 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-05 22:48:35.922162 | orchestrator | + source /opt/manager-vars.sh 2025-07-05 22:48:35.922212 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-05 22:48:35.922276 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-05 22:48:35.922291 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-05 22:48:35.922307 | orchestrator | ++ CEPH_VERSION=reef 2025-07-05 22:48:35.922319 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-05 22:48:35.922338 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2025-07-05 22:48:35.922350 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-05 22:48:35.922372 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-05 22:48:35.922384 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-05 22:48:35.922413 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-05 22:48:35.922425 | orchestrator | ++ export ARA=false 2025-07-05 22:48:35.922437 | orchestrator | ++ ARA=false 2025-07-05 22:48:35.922449 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-05 22:48:35.922462 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-05 22:48:35.922473 | orchestrator | ++ export TEMPEST=false 2025-07-05 22:48:35.922485 | orchestrator | ++ TEMPEST=false 2025-07-05 22:48:35.922496 | orchestrator | ++ export IS_ZUUL=true 2025-07-05 22:48:35.922508 | orchestrator | ++ IS_ZUUL=true 2025-07-05 22:48:35.922519 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.94 2025-07-05 22:48:35.922531 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.94 2025-07-05 22:48:35.922542 | orchestrator | ++ export EXTERNAL_API=false 2025-07-05 22:48:35.922553 | orchestrator | ++ EXTERNAL_API=false 2025-07-05 22:48:35.922565 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-05 22:48:35.922576 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-05 22:48:35.922587 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-05 22:48:35.922598 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-05 22:48:35.922610 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-05 22:48:35.922621 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-05 22:48:35.922633 | orchestrator | 2025-07-05 22:48:35.922645 | orchestrator | # PULL IMAGES 2025-07-05 22:48:35.922656 | orchestrator | 2025-07-05 22:48:35.922668 | orchestrator | + echo 2025-07-05 22:48:35.922679 | orchestrator | + echo '# PULL IMAGES' 2025-07-05 22:48:35.922690 | orchestrator | + echo 2025-07-05 22:48:35.923784 | orchestrator | ++ semver latest 7.0.0 2025-07-05 
22:48:35.982545 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-05 22:48:35.982667 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-05 22:48:35.982711 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-07-05 22:48:37.537195 | orchestrator | 2025-07-05 22:48:37 | INFO  | Trying to run play pull-images in environment custom 2025-07-05 22:48:47.645567 | orchestrator | 2025-07-05 22:48:47 | INFO  | Task c7b2521b-bb9d-42c4-b749-60660b207a2b (pull-images) was prepared for execution. 2025-07-05 22:48:47.645746 | orchestrator | 2025-07-05 22:48:47 | INFO  | It takes a moment until task c7b2521b-bb9d-42c4-b749-60660b207a2b (pull-images) has been started and output is visible here. 2025-07-05 22:50:57.451575 | orchestrator | 2025-07-05 22:50:57.451678 | orchestrator | PLAY [Pull images] ************************************************************* 2025-07-05 22:50:57.451687 | orchestrator | 2025-07-05 22:50:57.451691 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-07-05 22:50:57.451706 | orchestrator | Saturday 05 July 2025 22:48:51 +0000 (0:00:00.159) 0:00:00.159 ********* 2025-07-05 22:50:57.451710 | orchestrator | changed: [testbed-manager] 2025-07-05 22:50:57.451716 | orchestrator | 2025-07-05 22:50:57.451720 | orchestrator | TASK [Pull other images] ******************************************************* 2025-07-05 22:50:57.451724 | orchestrator | Saturday 05 July 2025 22:50:00 +0000 (0:01:08.766) 0:01:08.925 ********* 2025-07-05 22:50:57.451729 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-07-05 22:50:57.451735 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-07-05 22:50:57.451740 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-07-05 22:50:57.451744 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-07-05 22:50:57.451798 | orchestrator | changed: [testbed-manager] => (item=common) 2025-07-05 22:50:57.451804 | orchestrator | 
changed: [testbed-manager] => (item=designate) 2025-07-05 22:50:57.451824 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-07-05 22:50:57.451831 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-07-05 22:50:57.451836 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-07-05 22:50:57.451840 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-07-05 22:50:57.451844 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-07-05 22:50:57.451847 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-07-05 22:50:57.451852 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-07-05 22:50:57.451856 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-07-05 22:50:57.451859 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-07-05 22:50:57.451863 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-07-05 22:50:57.451867 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-07-05 22:50:57.451871 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-07-05 22:50:57.451875 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-07-05 22:50:57.451879 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-07-05 22:50:57.451882 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-07-05 22:50:57.451886 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-07-05 22:50:57.451890 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-07-05 22:50:57.451894 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-07-05 22:50:57.451898 | orchestrator | 2025-07-05 22:50:57.451901 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:50:57.451906 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:50:57.451911 | orchestrator | 
2025-07-05 22:50:57.451915 | orchestrator | 2025-07-05 22:50:57.451919 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:50:57.451923 | orchestrator | Saturday 05 July 2025 22:50:57 +0000 (0:00:56.956) 0:02:05.882 ********* 2025-07-05 22:50:57.451927 | orchestrator | =============================================================================== 2025-07-05 22:50:57.451930 | orchestrator | Pull keystone image ---------------------------------------------------- 68.77s 2025-07-05 22:50:57.451934 | orchestrator | Pull other images ------------------------------------------------------ 56.96s 2025-07-05 22:50:59.368811 | orchestrator | 2025-07-05 22:50:59 | INFO  | Trying to run play wipe-partitions in environment custom 2025-07-05 22:51:09.631839 | orchestrator | 2025-07-05 22:51:09 | INFO  | Task 7d6f3d01-5082-486a-ae9c-6581a9648fe3 (wipe-partitions) was prepared for execution. 2025-07-05 22:51:09.631938 | orchestrator | 2025-07-05 22:51:09 | INFO  | It takes a moment until task 7d6f3d01-5082-486a-ae9c-6581a9648fe3 (wipe-partitions) has been started and output is visible here. 
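The `semver`-guarded steps seen twice in the trace (against `7.1.1` before the manager restart, and against `7.0.0` before `osism apply ... pull-images`) follow a recognizable pattern: compare `MANAGER_VERSION` against a threshold, and special-case the literal `latest`, which a numeric comparator cannot order. A hedged sketch, where the function name is illustrative and `semver A B` is assumed to print `-1`/`0`/`1` like a comparator:

```shell
# Sketch of the version gate from the trace: `semver latest 7.1.1` prints -1
# (latest is not orderable), so the [[ -1 -ge 0 ]] test fails and the literal
# "latest" fallback decides instead. Function name is illustrative.
version_at_least() {
    local version="$1" threshold="$2"
    local cmp
    cmp="$(semver "$version" "$threshold")"   # prints -1, 0, or 1
    [[ "$cmp" -ge 0 ]] || [[ "$version" == latest ]]
}
```

This explains why the gated steps run in this job even though the comparator returns `-1`: with `MANAGER_VERSION=latest`, the second test always succeeds.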
2025-07-05 22:51:21.181996 | orchestrator | 2025-07-05 22:51:21.182165 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-07-05 22:51:21.182182 | orchestrator | 2025-07-05 22:51:21.182195 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-07-05 22:51:21.182217 | orchestrator | Saturday 05 July 2025 22:51:13 +0000 (0:00:00.120) 0:00:00.120 ********* 2025-07-05 22:51:21.182229 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:51:21.182242 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:51:21.182254 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:51:21.182265 | orchestrator | 2025-07-05 22:51:21.182277 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-07-05 22:51:21.182289 | orchestrator | Saturday 05 July 2025 22:51:13 +0000 (0:00:00.555) 0:00:00.676 ********* 2025-07-05 22:51:21.182350 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:51:21.182362 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:51:21.182374 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:51:21.182413 | orchestrator | 2025-07-05 22:51:21.182425 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-07-05 22:51:21.182437 | orchestrator | Saturday 05 July 2025 22:51:13 +0000 (0:00:00.221) 0:00:00.897 ********* 2025-07-05 22:51:21.182448 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:51:21.182460 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:51:21.182472 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:51:21.182483 | orchestrator | 2025-07-05 22:51:21.182494 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-07-05 22:51:21.182505 | orchestrator | Saturday 05 July 2025 22:51:14 +0000 (0:00:00.654) 0:00:01.552 ********* 2025-07-05 22:51:21.182519 | orchestrator | skipping: 
[testbed-node-3] 2025-07-05 22:51:21.182531 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:51:21.182543 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:51:21.182555 | orchestrator | 2025-07-05 22:51:21.182567 | orchestrator | TASK [Check device availability] *********************************************** 2025-07-05 22:51:21.182580 | orchestrator | Saturday 05 July 2025 22:51:14 +0000 (0:00:00.244) 0:00:01.796 ********* 2025-07-05 22:51:21.182592 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-07-05 22:51:21.182605 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-07-05 22:51:21.182618 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-07-05 22:51:21.182631 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-07-05 22:51:21.182648 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-07-05 22:51:21.182661 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-07-05 22:51:21.182674 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-07-05 22:51:21.182687 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-07-05 22:51:21.182699 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-07-05 22:51:21.182711 | orchestrator | 2025-07-05 22:51:21.182724 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-07-05 22:51:21.182737 | orchestrator | Saturday 05 July 2025 22:51:16 +0000 (0:00:01.168) 0:00:02.965 ********* 2025-07-05 22:51:21.182750 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-07-05 22:51:21.182763 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-07-05 22:51:21.182774 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-07-05 22:51:21.182785 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-07-05 22:51:21.182796 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-07-05 22:51:21.182807 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-07-05 22:51:21.182819 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-07-05 22:51:21.182829 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-07-05 22:51:21.182841 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-07-05 22:51:21.182852 | orchestrator | 2025-07-05 22:51:21.182863 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-07-05 22:51:21.182874 | orchestrator | Saturday 05 July 2025 22:51:17 +0000 (0:00:01.334) 0:00:04.299 ********* 2025-07-05 22:51:21.182885 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-07-05 22:51:21.182896 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-07-05 22:51:21.182907 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-07-05 22:51:21.182918 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-07-05 22:51:21.182929 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-07-05 22:51:21.182940 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-07-05 22:51:21.182951 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-07-05 22:51:21.182962 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-07-05 22:51:21.182973 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-07-05 22:51:21.182984 | orchestrator | 2025-07-05 22:51:21.182996 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-07-05 22:51:21.183007 | orchestrator | Saturday 05 July 2025 22:51:19 +0000 (0:00:02.237) 0:00:06.537 ********* 2025-07-05 22:51:21.183027 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:51:21.183039 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:51:21.183050 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:51:21.183061 | orchestrator | 2025-07-05 22:51:21.183072 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-07-05 22:51:21.183084 | orchestrator | Saturday 05 July 2025 22:51:20 +0000 (0:00:00.637) 0:00:07.175 ********* 2025-07-05 22:51:21.183095 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:51:21.183106 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:51:21.183117 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:51:21.183128 | orchestrator | 2025-07-05 22:51:21.183139 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:51:21.183152 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:51:21.183164 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:51:21.183195 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:51:21.183207 | orchestrator | 2025-07-05 22:51:21.183218 | orchestrator | 2025-07-05 22:51:21.183236 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:51:21.183248 | orchestrator | Saturday 05 July 2025 22:51:20 +0000 (0:00:00.686) 0:00:07.861 ********* 2025-07-05 22:51:21.183259 | orchestrator | =============================================================================== 2025-07-05 22:51:21.183285 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.24s 2025-07-05 22:51:21.183311 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s 2025-07-05 22:51:21.183332 | orchestrator | Check device availability ----------------------------------------------- 1.17s 2025-07-05 22:51:21.183344 | orchestrator | Request device events from the kernel ----------------------------------- 0.69s 2025-07-05 22:51:21.183356 | orchestrator | Find all logical devices with prefix ceph 
------------------------------- 0.65s 2025-07-05 22:51:21.183367 | orchestrator | Reload udev rules ------------------------------------------------------- 0.64s 2025-07-05 22:51:21.183378 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.56s 2025-07-05 22:51:21.183389 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2025-07-05 22:51:21.183400 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s 2025-07-05 22:51:33.289612 | orchestrator | 2025-07-05 22:51:33 | INFO  | Task ee0f72ff-30e5-4dfa-b6e7-5ecf3c8da469 (facts) was prepared for execution. 2025-07-05 22:51:33.289730 | orchestrator | 2025-07-05 22:51:33 | INFO  | It takes a moment until task ee0f72ff-30e5-4dfa-b6e7-5ecf3c8da469 (facts) has been started and output is visible here. 2025-07-05 22:51:45.576403 | orchestrator | 2025-07-05 22:51:45.576522 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-05 22:51:45.576538 | orchestrator | 2025-07-05 22:51:45.576550 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-05 22:51:45.576563 | orchestrator | Saturday 05 July 2025 22:51:37 +0000 (0:00:00.267) 0:00:00.267 ********* 2025-07-05 22:51:45.576575 | orchestrator | ok: [testbed-manager] 2025-07-05 22:51:45.576587 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:51:45.576598 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:51:45.576609 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:51:45.576620 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:51:45.576631 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:51:45.576642 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:51:45.576655 | orchestrator | 2025-07-05 22:51:45.576674 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-05 22:51:45.576733 | 
orchestrator | Saturday 05 July 2025 22:51:38 +0000 (0:00:01.053) 0:00:01.320 ********* 2025-07-05 22:51:45.576758 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:51:45.576777 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:51:45.576795 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:51:45.576811 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:51:45.576827 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:51:45.576844 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:51:45.576861 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:51:45.576880 | orchestrator | 2025-07-05 22:51:45.576898 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-05 22:51:45.576917 | orchestrator | 2025-07-05 22:51:45.576936 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-05 22:51:45.576956 | orchestrator | Saturday 05 July 2025 22:51:39 +0000 (0:00:01.125) 0:00:02.445 ********* 2025-07-05 22:51:45.576975 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:51:45.576995 | orchestrator | ok: [testbed-manager] 2025-07-05 22:51:45.577013 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:51:45.577034 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:51:45.577053 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:51:45.577069 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:51:45.577081 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:51:45.577094 | orchestrator | 2025-07-05 22:51:45.577106 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-05 22:51:45.577119 | orchestrator | 2025-07-05 22:51:45.577132 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-05 22:51:45.577145 | orchestrator | Saturday 05 July 2025 22:51:44 +0000 (0:00:04.833) 0:00:07.279 ********* 2025-07-05 22:51:45.577158 | orchestrator | 
skipping: [testbed-manager] 2025-07-05 22:51:45.577170 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:51:45.577181 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:51:45.577192 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:51:45.577203 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:51:45.577214 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:51:45.577225 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:51:45.577235 | orchestrator | 2025-07-05 22:51:45.577246 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:51:45.577258 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:51:45.577271 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:51:45.577315 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:51:45.577329 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:51:45.577359 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:51:45.577371 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:51:45.577382 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 22:51:45.577393 | orchestrator | 2025-07-05 22:51:45.577404 | orchestrator | 2025-07-05 22:51:45.577415 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:51:45.577426 | orchestrator | Saturday 05 July 2025 22:51:45 +0000 (0:00:00.502) 0:00:07.782 ********* 2025-07-05 22:51:45.577437 | orchestrator | =============================================================================== 
2025-07-05 22:51:45.577448 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.83s
2025-07-05 22:51:45.577473 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.13s
2025-07-05 22:51:45.577484 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.05s
2025-07-05 22:51:45.577495 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2025-07-05 22:51:47.491076 | orchestrator | 2025-07-05 22:51:47 | INFO  | Task e8ff67a8-5caf-4441-8cba-7a05e7f00acd (ceph-configure-lvm-volumes) was prepared for execution.
2025-07-05 22:51:47.491183 | orchestrator | 2025-07-05 22:51:47 | INFO  | It takes a moment until task e8ff67a8-5caf-4441-8cba-7a05e7f00acd (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-07-05 22:51:59.405197 | orchestrator |
2025-07-05 22:51:59.405357 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-07-05 22:51:59.405377 | orchestrator |
2025-07-05 22:51:59.405389 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-05 22:51:59.405402 | orchestrator | Saturday 05 July 2025 22:51:51 +0000 (0:00:00.326) 0:00:00.326 *********
2025-07-05 22:51:59.405414 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-05 22:51:59.405426 | orchestrator |
2025-07-05 22:51:59.405439 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-05 22:51:59.405450 | orchestrator | Saturday 05 July 2025 22:51:52 +0000 (0:00:00.265) 0:00:00.592 *********
2025-07-05 22:51:59.405461 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:51:59.405474 | orchestrator |
2025-07-05 22:51:59.405485 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.405496 | orchestrator | Saturday 05 July 2025 22:51:52 +0000 (0:00:00.248) 0:00:00.840 *********
2025-07-05 22:51:59.405507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-07-05 22:51:59.405519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-07-05 22:51:59.405530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-07-05 22:51:59.405541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-07-05 22:51:59.405552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-07-05 22:51:59.405562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-07-05 22:51:59.405573 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-07-05 22:51:59.405584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-07-05 22:51:59.405595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-07-05 22:51:59.405606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-07-05 22:51:59.405617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-07-05 22:51:59.405628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-07-05 22:51:59.405639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-07-05 22:51:59.405649 | orchestrator |
2025-07-05 22:51:59.405660 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.405671 | orchestrator | Saturday 05 July 2025 22:51:52 +0000 (0:00:00.417) 0:00:01.258 *********
2025-07-05 22:51:59.405683 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.405694 | orchestrator |
2025-07-05 22:51:59.405705 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.405724 | orchestrator | Saturday 05 July 2025 22:51:53 +0000 (0:00:00.499) 0:00:01.758 *********
2025-07-05 22:51:59.405736 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.405747 | orchestrator |
2025-07-05 22:51:59.405778 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.405790 | orchestrator | Saturday 05 July 2025 22:51:53 +0000 (0:00:00.201) 0:00:01.959 *********
2025-07-05 22:51:59.405801 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.405812 | orchestrator |
2025-07-05 22:51:59.405823 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.405834 | orchestrator | Saturday 05 July 2025 22:51:53 +0000 (0:00:00.230) 0:00:02.190 *********
2025-07-05 22:51:59.405845 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.405856 | orchestrator |
2025-07-05 22:51:59.405867 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.405878 | orchestrator | Saturday 05 July 2025 22:51:53 +0000 (0:00:00.216) 0:00:02.406 *********
2025-07-05 22:51:59.405889 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.405900 | orchestrator |
2025-07-05 22:51:59.405911 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.405922 | orchestrator | Saturday 05 July 2025 22:51:54 +0000 (0:00:00.189) 0:00:02.596 *********
2025-07-05 22:51:59.405933 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.405944 | orchestrator |
2025-07-05 22:51:59.405955 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.405966 | orchestrator | Saturday 05 July 2025 22:51:54 +0000 (0:00:00.194) 0:00:02.790 *********
2025-07-05 22:51:59.405977 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.405988 | orchestrator |
2025-07-05 22:51:59.405999 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.406010 | orchestrator | Saturday 05 July 2025 22:51:54 +0000 (0:00:00.211) 0:00:03.002 *********
2025-07-05 22:51:59.406079 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.406090 | orchestrator |
2025-07-05 22:51:59.406101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.406113 | orchestrator | Saturday 05 July 2025 22:51:54 +0000 (0:00:00.227) 0:00:03.229 *********
2025-07-05 22:51:59.406123 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c)
2025-07-05 22:51:59.406136 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c)
2025-07-05 22:51:59.406147 | orchestrator |
2025-07-05 22:51:59.406158 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.406169 | orchestrator | Saturday 05 July 2025 22:51:55 +0000 (0:00:00.431) 0:00:03.661 *********
2025-07-05 22:51:59.406200 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5326e027-1676-4a37-b778-dc441a5dd20f)
2025-07-05 22:51:59.406212 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5326e027-1676-4a37-b778-dc441a5dd20f)
2025-07-05 22:51:59.406223 | orchestrator |
2025-07-05 22:51:59.406234 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.406245 | orchestrator | Saturday 05 July 2025 22:51:55 +0000 (0:00:00.406) 0:00:04.068 *********
2025-07-05 22:51:59.406256 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ed4648fa-96a1-4881-93bd-124d41734f11)
2025-07-05 22:51:59.406267 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ed4648fa-96a1-4881-93bd-124d41734f11)
2025-07-05 22:51:59.406308 | orchestrator |
2025-07-05 22:51:59.406319 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.406331 | orchestrator | Saturday 05 July 2025 22:51:56 +0000 (0:00:00.589) 0:00:04.657 *********
2025-07-05 22:51:59.406342 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_21be9c94-8d55-4d0c-8ee7-a63f66622af7)
2025-07-05 22:51:59.406353 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_21be9c94-8d55-4d0c-8ee7-a63f66622af7)
2025-07-05 22:51:59.406364 | orchestrator |
2025-07-05 22:51:59.406375 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:51:59.406386 | orchestrator | Saturday 05 July 2025 22:51:56 +0000 (0:00:00.617) 0:00:05.274 *********
2025-07-05 22:51:59.406405 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-05 22:51:59.406416 | orchestrator |
2025-07-05 22:51:59.406427 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:51:59.406438 | orchestrator | Saturday 05 July 2025 22:51:57 +0000 (0:00:00.715) 0:00:05.990 *********
2025-07-05 22:51:59.406449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-07-05 22:51:59.406460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-07-05 22:51:59.406471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-07-05 22:51:59.406482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-07-05 22:51:59.406493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-07-05 22:51:59.406504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-07-05 22:51:59.406515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-07-05 22:51:59.406526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-07-05 22:51:59.406537 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-07-05 22:51:59.406548 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-07-05 22:51:59.406559 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-07-05 22:51:59.406569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-07-05 22:51:59.406586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-07-05 22:51:59.406597 | orchestrator |
2025-07-05 22:51:59.406608 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:51:59.406619 | orchestrator | Saturday 05 July 2025 22:51:57 +0000 (0:00:00.394) 0:00:06.384 *********
2025-07-05 22:51:59.406630 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.406641 | orchestrator |
2025-07-05 22:51:59.406652 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:51:59.406663 | orchestrator | Saturday 05 July 2025 22:51:58 +0000 (0:00:00.204) 0:00:06.589 *********
2025-07-05 22:51:59.406674 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.406685 | orchestrator |
2025-07-05 22:51:59.406696 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:51:59.406708 | orchestrator | Saturday 05 July 2025 22:51:58 +0000 (0:00:00.195) 0:00:06.784 *********
2025-07-05 22:51:59.406718 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.406729 | orchestrator |
2025-07-05 22:51:59.406740 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:51:59.406751 | orchestrator | Saturday 05 July 2025 22:51:58 +0000 (0:00:00.193) 0:00:06.978 *********
2025-07-05 22:51:59.406762 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.406773 | orchestrator |
2025-07-05 22:51:59.406784 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:51:59.406795 | orchestrator | Saturday 05 July 2025 22:51:58 +0000 (0:00:00.186) 0:00:07.165 *********
2025-07-05 22:51:59.406806 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.406817 | orchestrator |
2025-07-05 22:51:59.406828 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:51:59.406839 | orchestrator | Saturday 05 July 2025 22:51:58 +0000 (0:00:00.208) 0:00:07.373 *********
2025-07-05 22:51:59.406850 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.406861 | orchestrator |
2025-07-05 22:51:59.406872 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:51:59.406890 | orchestrator | Saturday 05 July 2025 22:51:58 +0000 (0:00:00.192) 0:00:07.565 *********
2025-07-05 22:51:59.406901 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:51:59.406912 | orchestrator |
2025-07-05 22:51:59.406923 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:51:59.406934 | orchestrator | Saturday 05 July 2025 22:51:59 +0000 (0:00:00.189) 0:00:07.755 *********
2025-07-05 22:51:59.406952 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.797472 | orchestrator |
2025-07-05 22:52:06.797557 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:52:06.797568 | orchestrator | Saturday 05 July 2025 22:51:59 +0000 (0:00:00.219) 0:00:07.974 *********
2025-07-05 22:52:06.797576 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-07-05 22:52:06.797584 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-07-05 22:52:06.797591 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-07-05 22:52:06.797598 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-07-05 22:52:06.797604 | orchestrator |
2025-07-05 22:52:06.797611 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:52:06.797618 | orchestrator | Saturday 05 July 2025 22:52:00 +0000 (0:00:00.966) 0:00:08.941 *********
2025-07-05 22:52:06.797625 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.797631 | orchestrator |
2025-07-05 22:52:06.797638 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:52:06.797644 | orchestrator | Saturday 05 July 2025 22:52:00 +0000 (0:00:00.191) 0:00:09.132 *********
2025-07-05 22:52:06.797651 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.797657 | orchestrator |
2025-07-05 22:52:06.797663 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:52:06.797670 | orchestrator | Saturday 05 July 2025 22:52:00 +0000 (0:00:00.204) 0:00:09.336 *********
2025-07-05 22:52:06.797676 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.797682 | orchestrator |
2025-07-05 22:52:06.797704 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:52:06.797711 | orchestrator | Saturday 05 July 2025 22:52:00 +0000 (0:00:00.200) 0:00:09.537 *********
2025-07-05 22:52:06.797717 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.797724 | orchestrator |
2025-07-05 22:52:06.797730 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-07-05 22:52:06.797736 | orchestrator | Saturday 05 July 2025 22:52:01 +0000 (0:00:00.214) 0:00:09.751 *********
2025-07-05 22:52:06.797743 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-07-05 22:52:06.797749 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-07-05 22:52:06.797755 | orchestrator |
2025-07-05 22:52:06.797761 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-07-05 22:52:06.797768 | orchestrator | Saturday 05 July 2025 22:52:01 +0000 (0:00:00.190) 0:00:09.942 *********
2025-07-05 22:52:06.797774 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.797780 | orchestrator |
2025-07-05 22:52:06.797787 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-07-05 22:52:06.797793 | orchestrator | Saturday 05 July 2025 22:52:01 +0000 (0:00:00.128) 0:00:10.070 *********
2025-07-05 22:52:06.797799 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.797805 | orchestrator |
2025-07-05 22:52:06.797812 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-07-05 22:52:06.797818 | orchestrator | Saturday 05 July 2025 22:52:01 +0000 (0:00:00.142) 0:00:10.213 *********
2025-07-05 22:52:06.797824 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.797831 | orchestrator |
2025-07-05 22:52:06.797837 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-07-05 22:52:06.797844 | orchestrator | Saturday 05 July 2025 22:52:01 +0000 (0:00:00.140) 0:00:10.354 *********
2025-07-05 22:52:06.797850 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:52:06.797856 | orchestrator |
2025-07-05 22:52:06.797863 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-07-05 22:52:06.797887 | orchestrator | Saturday 05 July 2025 22:52:01 +0000 (0:00:00.150) 0:00:10.504 *********
2025-07-05 22:52:06.797894 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8de564a6-401f-59e2-a445-234b3be175ce'}})
2025-07-05 22:52:06.797901 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2634d3d6-ac41-59e6-b3da-1ade7ee25156'}})
2025-07-05 22:52:06.797907 | orchestrator |
2025-07-05 22:52:06.797914 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-07-05 22:52:06.797920 | orchestrator | Saturday 05 July 2025 22:52:02 +0000 (0:00:00.165) 0:00:10.670 *********
2025-07-05 22:52:06.797927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8de564a6-401f-59e2-a445-234b3be175ce'}})
2025-07-05 22:52:06.797939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2634d3d6-ac41-59e6-b3da-1ade7ee25156'}})
2025-07-05 22:52:06.797946 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.797952 | orchestrator |
2025-07-05 22:52:06.797958 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-07-05 22:52:06.797964 | orchestrator | Saturday 05 July 2025 22:52:02 +0000 (0:00:00.139) 0:00:10.810 *********
2025-07-05 22:52:06.797971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8de564a6-401f-59e2-a445-234b3be175ce'}})
2025-07-05 22:52:06.797977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2634d3d6-ac41-59e6-b3da-1ade7ee25156'}})
2025-07-05 22:52:06.797983 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.797990 | orchestrator |
2025-07-05 22:52:06.797996 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-07-05 22:52:06.798002 | orchestrator | Saturday 05 July 2025 22:52:02 +0000 (0:00:00.153) 0:00:10.963 *********
2025-07-05 22:52:06.798009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8de564a6-401f-59e2-a445-234b3be175ce'}})
2025-07-05 22:52:06.798052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2634d3d6-ac41-59e6-b3da-1ade7ee25156'}})
2025-07-05 22:52:06.798061 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.798069 | orchestrator |
2025-07-05 22:52:06.798090 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-07-05 22:52:06.798098 | orchestrator | Saturday 05 July 2025 22:52:02 +0000 (0:00:00.358) 0:00:11.321 *********
2025-07-05 22:52:06.798105 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:52:06.798113 | orchestrator |
2025-07-05 22:52:06.798171 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-07-05 22:52:06.798179 | orchestrator | Saturday 05 July 2025 22:52:02 +0000 (0:00:00.140) 0:00:11.461 *********
2025-07-05 22:52:06.798186 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:52:06.798194 | orchestrator |
2025-07-05 22:52:06.798201 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-07-05 22:52:06.798208 | orchestrator | Saturday 05 July 2025 22:52:03 +0000 (0:00:00.143) 0:00:11.605 *********
2025-07-05 22:52:06.798215 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.798222 | orchestrator |
2025-07-05 22:52:06.798229 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-07-05 22:52:06.798236 | orchestrator | Saturday 05 July 2025 22:52:03 +0000 (0:00:00.137) 0:00:11.742 *********
2025-07-05 22:52:06.798244 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.798251 | orchestrator |
2025-07-05 22:52:06.798258 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-07-05 22:52:06.798290 | orchestrator | Saturday 05 July 2025 22:52:03 +0000 (0:00:00.139) 0:00:11.882 *********
2025-07-05 22:52:06.798297 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.798304 | orchestrator |
2025-07-05 22:52:06.798312 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-07-05 22:52:06.798326 | orchestrator | Saturday 05 July 2025 22:52:03 +0000 (0:00:00.136) 0:00:12.019 *********
2025-07-05 22:52:06.798334 | orchestrator | ok: [testbed-node-3] => {
2025-07-05 22:52:06.798341 | orchestrator |  "ceph_osd_devices": {
2025-07-05 22:52:06.798348 | orchestrator |  "sdb": {
2025-07-05 22:52:06.798355 | orchestrator |  "osd_lvm_uuid": "8de564a6-401f-59e2-a445-234b3be175ce"
2025-07-05 22:52:06.798363 | orchestrator |  },
2025-07-05 22:52:06.798371 | orchestrator |  "sdc": {
2025-07-05 22:52:06.798378 | orchestrator |  "osd_lvm_uuid": "2634d3d6-ac41-59e6-b3da-1ade7ee25156"
2025-07-05 22:52:06.798385 | orchestrator |  }
2025-07-05 22:52:06.798392 | orchestrator |  }
2025-07-05 22:52:06.798400 | orchestrator | }
2025-07-05 22:52:06.798407 | orchestrator |
2025-07-05 22:52:06.798413 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-07-05 22:52:06.798420 | orchestrator | Saturday 05 July 2025 22:52:03 +0000 (0:00:00.142) 0:00:12.161 *********
2025-07-05 22:52:06.798426 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.798432 | orchestrator |
2025-07-05 22:52:06.798439 | orchestrator | TASK [Print DB devices] ********************************************************
2025-07-05 22:52:06.798445 | orchestrator | Saturday 05 July 2025 22:52:03 +0000 (0:00:00.143) 0:00:12.304 *********
2025-07-05 22:52:06.798451 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.798457 | orchestrator |
2025-07-05 22:52:06.798464 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-07-05 22:52:06.798470 | orchestrator | Saturday 05 July 2025 22:52:03 +0000 (0:00:00.136) 0:00:12.440 *********
2025-07-05 22:52:06.798476 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:52:06.798482 | orchestrator |
2025-07-05 22:52:06.798489 | orchestrator | TASK [Print configuration data] ************************************************
2025-07-05 22:52:06.798495 | orchestrator | Saturday 05 July 2025 22:52:03 +0000 (0:00:00.135) 0:00:12.576 *********
2025-07-05 22:52:06.798501 | orchestrator | changed: [testbed-node-3] => {
2025-07-05 22:52:06.798507 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-07-05 22:52:06.798514 | orchestrator |  "ceph_osd_devices": {
2025-07-05 22:52:06.798520 | orchestrator |  "sdb": {
2025-07-05 22:52:06.798527 | orchestrator |  "osd_lvm_uuid": "8de564a6-401f-59e2-a445-234b3be175ce"
2025-07-05 22:52:06.798533 | orchestrator |  },
2025-07-05 22:52:06.798539 | orchestrator |  "sdc": {
2025-07-05 22:52:06.798545 | orchestrator |  "osd_lvm_uuid": "2634d3d6-ac41-59e6-b3da-1ade7ee25156"
2025-07-05 22:52:06.798552 | orchestrator |  }
2025-07-05 22:52:06.798562 | orchestrator |  },
2025-07-05 22:52:06.798568 | orchestrator |  "lvm_volumes": [
2025-07-05 22:52:06.798575 | orchestrator |  {
2025-07-05 22:52:06.798581 | orchestrator |  "data": "osd-block-8de564a6-401f-59e2-a445-234b3be175ce",
2025-07-05 22:52:06.798588 | orchestrator |  "data_vg": "ceph-8de564a6-401f-59e2-a445-234b3be175ce"
2025-07-05 22:52:06.798594 | orchestrator |  },
2025-07-05 22:52:06.798601 | orchestrator |  {
2025-07-05 22:52:06.798607 | orchestrator |  "data": "osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156",
2025-07-05 22:52:06.798613 | orchestrator |  "data_vg": "ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156"
2025-07-05 22:52:06.798620 | orchestrator |  }
2025-07-05 22:52:06.798626 | orchestrator |  ]
2025-07-05 22:52:06.798632 | orchestrator |  }
2025-07-05 22:52:06.798643 | orchestrator | }
2025-07-05 22:52:06.798650 | orchestrator |
2025-07-05 22:52:06.798657 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-07-05 22:52:06.798663 | orchestrator | Saturday 05 July 2025 22:52:04 +0000 (0:00:00.228) 0:00:12.805 *********
2025-07-05 22:52:06.798669 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-05 22:52:06.798676 | orchestrator |
2025-07-05 22:52:06.798682 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-07-05 22:52:06.798689 | orchestrator |
2025-07-05 22:52:06.798699 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-05 22:52:06.798705 | orchestrator | Saturday 05 July 2025 22:52:06 +0000 (0:00:02.065) 0:00:14.870 *********
2025-07-05 22:52:06.798712 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-07-05 22:52:06.798718 | orchestrator |
2025-07-05 22:52:06.798725 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-05 22:52:06.798731 | orchestrator | Saturday 05 July 2025 22:52:06 +0000 (0:00:00.264) 0:00:15.135 *********
2025-07-05 22:52:06.798737 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:52:06.798744 | orchestrator |
2025-07-05 22:52:06.798750 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:06.798762 | orchestrator | Saturday 05 July 2025 22:52:06 +0000 (0:00:00.236) 0:00:15.371 *********
2025-07-05 22:52:14.730135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-07-05 22:52:14.730241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-07-05 22:52:14.730280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-07-05 22:52:14.730292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-07-05 22:52:14.730302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-07-05 22:52:14.730312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-07-05 22:52:14.730323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-07-05 22:52:14.730332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-07-05 22:52:14.730342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-07-05 22:52:14.730352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-07-05 22:52:14.730362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-07-05 22:52:14.730371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-07-05 22:52:14.730381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-07-05 22:52:14.730391 | orchestrator |
2025-07-05 22:52:14.730402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:14.730412 | orchestrator | Saturday 05 July 2025 22:52:07 +0000 (0:00:00.382) 0:00:15.754 *********
2025-07-05 22:52:14.730422 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:52:14.730433 | orchestrator |
2025-07-05 22:52:14.730443 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:14.730453 | orchestrator | Saturday 05 July 2025 22:52:07 +0000 (0:00:00.205) 0:00:15.959 *********
2025-07-05 22:52:14.730463 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:52:14.730473 | orchestrator |
2025-07-05 22:52:14.730482 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:14.730492 | orchestrator | Saturday 05 July 2025 22:52:07 +0000 (0:00:00.188) 0:00:16.147 *********
2025-07-05 22:52:14.730502 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:52:14.730512 | orchestrator |
2025-07-05 22:52:14.730522 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:14.730532 | orchestrator | Saturday 05 July 2025 22:52:07 +0000 (0:00:00.193) 0:00:16.340 *********
2025-07-05 22:52:14.730542 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:52:14.730551 | orchestrator |
2025-07-05 22:52:14.730561 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:14.730571 | orchestrator | Saturday 05 July 2025 22:52:07 +0000 (0:00:00.195) 0:00:16.536 *********
2025-07-05 22:52:14.730581 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:52:14.730590 | orchestrator |
2025-07-05 22:52:14.730600 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:14.730631 | orchestrator | Saturday 05 July 2025 22:52:08 +0000 (0:00:00.191) 0:00:16.727 *********
2025-07-05 22:52:14.730642 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:52:14.730653 | orchestrator |
2025-07-05 22:52:14.730664 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:14.730675 | orchestrator | Saturday 05 July 2025 22:52:08 +0000 (0:00:00.580) 0:00:17.308 *********
2025-07-05 22:52:14.730685 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:52:14.730696 | orchestrator |
2025-07-05 22:52:14.730708 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:14.730719 | orchestrator | Saturday 05 July 2025 22:52:08 +0000 (0:00:00.214) 0:00:17.523 *********
2025-07-05 22:52:14.730745 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:52:14.730757 | orchestrator |
2025-07-05 22:52:14.730768 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:14.730779 | orchestrator | Saturday 05 July 2025 22:52:09 +0000 (0:00:00.216) 0:00:17.739 *********
2025-07-05 22:52:14.730790 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189)
2025-07-05 22:52:14.730802 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189)
2025-07-05 22:52:14.730813 | orchestrator |
2025-07-05 22:52:14.730824 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:14.730835 | orchestrator | Saturday 05 July 2025 22:52:09 +0000 (0:00:00.439) 0:00:18.178 *********
2025-07-05 22:52:14.730847 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_19122c33-f71f-45f9-9cf9-313728601123)
2025-07-05 22:52:14.730858 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_19122c33-f71f-45f9-9cf9-313728601123)
2025-07-05 22:52:14.730868 | orchestrator |
2025-07-05 22:52:14.730878 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:14.730888 | orchestrator | Saturday 05 July 2025 22:52:10 +0000 (0:00:00.439) 0:00:18.617 *********
2025-07-05 22:52:14.730898 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_04acd911-9b95-486d-a663-ed49966b13bc)
2025-07-05 22:52:14.730907 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_04acd911-9b95-486d-a663-ed49966b13bc)
2025-07-05 22:52:14.730917 | orchestrator |
2025-07-05 22:52:14.730927 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:14.730937 | orchestrator | Saturday 05 July 2025 22:52:10 +0000 (0:00:00.409) 0:00:19.027 *********
2025-07-05 22:52:14.730963 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b8c0761f-22b5-43a1-bf1b-76278e72919b)
2025-07-05 22:52:14.730974 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b8c0761f-22b5-43a1-bf1b-76278e72919b)
2025-07-05 22:52:14.730984 | orchestrator |
2025-07-05 22:52:14.730994 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:52:14.731003 | orchestrator | Saturday 05 July 2025 22:52:10 +0000 (0:00:00.448) 0:00:19.475 *********
2025-07-05 22:52:14.731019 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-05 22:52:14.731034 | orchestrator |
2025-07-05 22:52:14.731050 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:52:14.731075 | orchestrator | Saturday 05 July 2025 22:52:11 +0000 (0:00:00.351) 0:00:19.827 *********
2025-07-05 22:52:14.731093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-07-05 22:52:14.731106 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-07-05 22:52:14.731122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-07-05 22:52:14.731137 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-07-05 22:52:14.731151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-07-05 22:52:14.731177 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-07-05 22:52:14.731191 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-07-05 22:52:14.731204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-07-05 22:52:14.731218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-07-05 22:52:14.731232 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-07-05 22:52:14.731246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-07-05 22:52:14.731283 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-07-05 22:52:14.731298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-07-05 22:52:14.731313 | orchestrator |
2025-07-05 22:52:14.731328 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:52:14.731343 | orchestrator | Saturday 05 July 2025 22:52:11 +0000 (0:00:00.407) 0:00:20.234 *********
2025-07-05 22:52:14.731359 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:52:14.731375 | orchestrator |
2025-07-05 22:52:14.731390 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:52:14.731405 | orchestrator | Saturday 05 July 2025 22:52:11 +0000 (0:00:00.213) 0:00:20.448 *********
2025-07-05 22:52:14.731419 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:52:14.731435 | orchestrator |
2025-07-05 22:52:14.731451 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:52:14.731467 | orchestrator | Saturday 05 July 2025 22:52:12 +0000 (0:00:00.675) 0:00:21.123 *********
2025-07-05 22:52:14.731483 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:52:14.731499 | orchestrator |
2025-07-05 22:52:14.731511 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:14.731521 | orchestrator | Saturday 05 July 2025 22:52:12 +0000 (0:00:00.218) 0:00:21.342 ********* 2025-07-05 22:52:14.731530 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:14.731540 | orchestrator | 2025-07-05 22:52:14.731550 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:14.731559 | orchestrator | Saturday 05 July 2025 22:52:12 +0000 (0:00:00.204) 0:00:21.546 ********* 2025-07-05 22:52:14.731569 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:14.731578 | orchestrator | 2025-07-05 22:52:14.731596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:14.731607 | orchestrator | Saturday 05 July 2025 22:52:13 +0000 (0:00:00.207) 0:00:21.754 ********* 2025-07-05 22:52:14.731616 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:14.731626 | orchestrator | 2025-07-05 22:52:14.731636 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:14.731646 | orchestrator | Saturday 05 July 2025 22:52:13 +0000 (0:00:00.207) 0:00:21.961 ********* 2025-07-05 22:52:14.731655 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:14.731665 | orchestrator | 2025-07-05 22:52:14.731675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:14.731684 | orchestrator | Saturday 05 July 2025 22:52:13 +0000 (0:00:00.221) 0:00:22.183 ********* 2025-07-05 22:52:14.731694 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:14.731703 | orchestrator | 2025-07-05 22:52:14.731713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:14.731723 | orchestrator | Saturday 05 July 2025 22:52:13 +0000 
(0:00:00.238) 0:00:22.421 ********* 2025-07-05 22:52:14.731732 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-07-05 22:52:14.731743 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-07-05 22:52:14.731752 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-07-05 22:52:14.731770 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-07-05 22:52:14.731780 | orchestrator | 2025-07-05 22:52:14.731790 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:14.731799 | orchestrator | Saturday 05 July 2025 22:52:14 +0000 (0:00:00.668) 0:00:23.090 ********* 2025-07-05 22:52:14.731809 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:14.731819 | orchestrator | 2025-07-05 22:52:14.731839 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:21.646085 | orchestrator | Saturday 05 July 2025 22:52:14 +0000 (0:00:00.214) 0:00:23.305 ********* 2025-07-05 22:52:21.646198 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.646216 | orchestrator | 2025-07-05 22:52:21.646229 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:21.646241 | orchestrator | Saturday 05 July 2025 22:52:14 +0000 (0:00:00.201) 0:00:23.506 ********* 2025-07-05 22:52:21.646305 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.646317 | orchestrator | 2025-07-05 22:52:21.646329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:21.646340 | orchestrator | Saturday 05 July 2025 22:52:15 +0000 (0:00:00.234) 0:00:23.741 ********* 2025-07-05 22:52:21.646352 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.646364 | orchestrator | 2025-07-05 22:52:21.646376 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-05 22:52:21.646388 | orchestrator | 
Saturday 05 July 2025 22:52:15 +0000 (0:00:00.214) 0:00:23.956 ********* 2025-07-05 22:52:21.646399 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-07-05 22:52:21.646410 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-07-05 22:52:21.646421 | orchestrator | 2025-07-05 22:52:21.646433 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-05 22:52:21.646444 | orchestrator | Saturday 05 July 2025 22:52:15 +0000 (0:00:00.393) 0:00:24.349 ********* 2025-07-05 22:52:21.646455 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.646466 | orchestrator | 2025-07-05 22:52:21.646478 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-05 22:52:21.646489 | orchestrator | Saturday 05 July 2025 22:52:15 +0000 (0:00:00.143) 0:00:24.493 ********* 2025-07-05 22:52:21.646500 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.646511 | orchestrator | 2025-07-05 22:52:21.646522 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-05 22:52:21.646533 | orchestrator | Saturday 05 July 2025 22:52:16 +0000 (0:00:00.146) 0:00:24.639 ********* 2025-07-05 22:52:21.646544 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.646556 | orchestrator | 2025-07-05 22:52:21.646569 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-05 22:52:21.646582 | orchestrator | Saturday 05 July 2025 22:52:16 +0000 (0:00:00.148) 0:00:24.787 ********* 2025-07-05 22:52:21.646594 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:52:21.646607 | orchestrator | 2025-07-05 22:52:21.646619 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-05 22:52:21.646632 | orchestrator | Saturday 05 July 2025 22:52:16 +0000 (0:00:00.147) 0:00:24.935 ********* 
2025-07-05 22:52:21.646645 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b5adb4f-945c-5107-b1d3-f691d6050e0c'}}) 2025-07-05 22:52:21.646658 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '24fdde66-e3ee-586c-8774-3b73abfeacc0'}}) 2025-07-05 22:52:21.646671 | orchestrator | 2025-07-05 22:52:21.646683 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-05 22:52:21.646696 | orchestrator | Saturday 05 July 2025 22:52:16 +0000 (0:00:00.178) 0:00:25.113 ********* 2025-07-05 22:52:21.646709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b5adb4f-945c-5107-b1d3-f691d6050e0c'}})  2025-07-05 22:52:21.646723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '24fdde66-e3ee-586c-8774-3b73abfeacc0'}})  2025-07-05 22:52:21.646760 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.646773 | orchestrator | 2025-07-05 22:52:21.646785 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-05 22:52:21.646798 | orchestrator | Saturday 05 July 2025 22:52:16 +0000 (0:00:00.151) 0:00:25.265 ********* 2025-07-05 22:52:21.646810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b5adb4f-945c-5107-b1d3-f691d6050e0c'}})  2025-07-05 22:52:21.646824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '24fdde66-e3ee-586c-8774-3b73abfeacc0'}})  2025-07-05 22:52:21.646836 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.646849 | orchestrator | 2025-07-05 22:52:21.646861 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-05 22:52:21.646873 | orchestrator | Saturday 05 July 2025 22:52:16 +0000 (0:00:00.171) 0:00:25.436 ********* 2025-07-05 22:52:21.646885 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b5adb4f-945c-5107-b1d3-f691d6050e0c'}})  2025-07-05 22:52:21.646898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '24fdde66-e3ee-586c-8774-3b73abfeacc0'}})  2025-07-05 22:52:21.646910 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.646922 | orchestrator | 2025-07-05 22:52:21.646933 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-05 22:52:21.646966 | orchestrator | Saturday 05 July 2025 22:52:17 +0000 (0:00:00.164) 0:00:25.601 ********* 2025-07-05 22:52:21.646977 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:52:21.646989 | orchestrator | 2025-07-05 22:52:21.647000 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-05 22:52:21.647011 | orchestrator | Saturday 05 July 2025 22:52:17 +0000 (0:00:00.149) 0:00:25.750 ********* 2025-07-05 22:52:21.647022 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:52:21.647033 | orchestrator | 2025-07-05 22:52:21.647044 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-05 22:52:21.647055 | orchestrator | Saturday 05 July 2025 22:52:17 +0000 (0:00:00.133) 0:00:25.884 ********* 2025-07-05 22:52:21.647067 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.647078 | orchestrator | 2025-07-05 22:52:21.647108 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-05 22:52:21.647120 | orchestrator | Saturday 05 July 2025 22:52:17 +0000 (0:00:00.133) 0:00:26.017 ********* 2025-07-05 22:52:21.647131 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.647142 | orchestrator | 2025-07-05 22:52:21.647153 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-05 22:52:21.647165 | orchestrator | 
Saturday 05 July 2025 22:52:17 +0000 (0:00:00.352) 0:00:26.370 ********* 2025-07-05 22:52:21.647176 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.647187 | orchestrator | 2025-07-05 22:52:21.647198 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-07-05 22:52:21.647209 | orchestrator | Saturday 05 July 2025 22:52:17 +0000 (0:00:00.156) 0:00:26.526 ********* 2025-07-05 22:52:21.647220 | orchestrator | ok: [testbed-node-4] => { 2025-07-05 22:52:21.647231 | orchestrator |  "ceph_osd_devices": { 2025-07-05 22:52:21.647243 | orchestrator |  "sdb": { 2025-07-05 22:52:21.647273 | orchestrator |  "osd_lvm_uuid": "9b5adb4f-945c-5107-b1d3-f691d6050e0c" 2025-07-05 22:52:21.647284 | orchestrator |  }, 2025-07-05 22:52:21.647295 | orchestrator |  "sdc": { 2025-07-05 22:52:21.647306 | orchestrator |  "osd_lvm_uuid": "24fdde66-e3ee-586c-8774-3b73abfeacc0" 2025-07-05 22:52:21.647318 | orchestrator |  } 2025-07-05 22:52:21.647329 | orchestrator |  } 2025-07-05 22:52:21.647340 | orchestrator | } 2025-07-05 22:52:21.647352 | orchestrator | 2025-07-05 22:52:21.647363 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-05 22:52:21.647374 | orchestrator | Saturday 05 July 2025 22:52:18 +0000 (0:00:00.152) 0:00:26.679 ********* 2025-07-05 22:52:21.647393 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.647405 | orchestrator | 2025-07-05 22:52:21.647431 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-05 22:52:21.647453 | orchestrator | Saturday 05 July 2025 22:52:18 +0000 (0:00:00.134) 0:00:26.814 ********* 2025-07-05 22:52:21.647464 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.647476 | orchestrator | 2025-07-05 22:52:21.647487 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-07-05 22:52:21.647498 | orchestrator | Saturday 
05 July 2025 22:52:18 +0000 (0:00:00.154) 0:00:26.968 ********* 2025-07-05 22:52:21.647509 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:52:21.647520 | orchestrator | 2025-07-05 22:52:21.647532 | orchestrator | TASK [Print configuration data] ************************************************ 2025-07-05 22:52:21.647543 | orchestrator | Saturday 05 July 2025 22:52:18 +0000 (0:00:00.141) 0:00:27.109 ********* 2025-07-05 22:52:21.647554 | orchestrator | changed: [testbed-node-4] => { 2025-07-05 22:52:21.647565 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-05 22:52:21.647577 | orchestrator |  "ceph_osd_devices": { 2025-07-05 22:52:21.647588 | orchestrator |  "sdb": { 2025-07-05 22:52:21.647599 | orchestrator |  "osd_lvm_uuid": "9b5adb4f-945c-5107-b1d3-f691d6050e0c" 2025-07-05 22:52:21.647610 | orchestrator |  }, 2025-07-05 22:52:21.647621 | orchestrator |  "sdc": { 2025-07-05 22:52:21.647633 | orchestrator |  "osd_lvm_uuid": "24fdde66-e3ee-586c-8774-3b73abfeacc0" 2025-07-05 22:52:21.647644 | orchestrator |  } 2025-07-05 22:52:21.647655 | orchestrator |  }, 2025-07-05 22:52:21.647666 | orchestrator |  "lvm_volumes": [ 2025-07-05 22:52:21.647677 | orchestrator |  { 2025-07-05 22:52:21.647689 | orchestrator |  "data": "osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c", 2025-07-05 22:52:21.647700 | orchestrator |  "data_vg": "ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c" 2025-07-05 22:52:21.647711 | orchestrator |  }, 2025-07-05 22:52:21.647723 | orchestrator |  { 2025-07-05 22:52:21.647734 | orchestrator |  "data": "osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0", 2025-07-05 22:52:21.647745 | orchestrator |  "data_vg": "ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0" 2025-07-05 22:52:21.647756 | orchestrator |  } 2025-07-05 22:52:21.647767 | orchestrator |  ] 2025-07-05 22:52:21.647778 | orchestrator |  } 2025-07-05 22:52:21.647789 | orchestrator | } 2025-07-05 22:52:21.647801 | orchestrator | 2025-07-05 22:52:21.647812 | orchestrator | RUNNING HANDLER 
[Write configuration file] ************************************* 2025-07-05 22:52:21.647823 | orchestrator | Saturday 05 July 2025 22:52:18 +0000 (0:00:00.209) 0:00:27.319 ********* 2025-07-05 22:52:21.647834 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-05 22:52:21.647846 | orchestrator | 2025-07-05 22:52:21.647857 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-05 22:52:21.647868 | orchestrator | 2025-07-05 22:52:21.647879 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-05 22:52:21.647890 | orchestrator | Saturday 05 July 2025 22:52:19 +0000 (0:00:01.218) 0:00:28.538 ********* 2025-07-05 22:52:21.647901 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-07-05 22:52:21.647912 | orchestrator | 2025-07-05 22:52:21.647923 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-05 22:52:21.647935 | orchestrator | Saturday 05 July 2025 22:52:20 +0000 (0:00:00.504) 0:00:29.042 ********* 2025-07-05 22:52:21.647946 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:52:21.647957 | orchestrator | 2025-07-05 22:52:21.647968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:52:21.647979 | orchestrator | Saturday 05 July 2025 22:52:21 +0000 (0:00:00.742) 0:00:29.784 ********* 2025-07-05 22:52:21.647990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-07-05 22:52:21.648008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-07-05 22:52:21.648019 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-07-05 22:52:21.648030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-07-05 
22:52:21.648041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-07-05 22:52:21.648052 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-07-05 22:52:21.648070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-07-05 22:52:30.505354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-07-05 22:52:30.505458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-07-05 22:52:30.505473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-07-05 22:52:30.505484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-07-05 22:52:30.505496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-07-05 22:52:30.505507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-07-05 22:52:30.505518 | orchestrator | 2025-07-05 22:52:30.505533 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:52:30.505574 | orchestrator | Saturday 05 July 2025 22:52:21 +0000 (0:00:00.430) 0:00:30.215 ********* 2025-07-05 22:52:30.505587 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.505600 | orchestrator | 2025-07-05 22:52:30.505611 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:52:30.505622 | orchestrator | Saturday 05 July 2025 22:52:21 +0000 (0:00:00.246) 0:00:30.461 ********* 2025-07-05 22:52:30.505633 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.505644 | orchestrator | 2025-07-05 22:52:30.505655 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-07-05 22:52:30.505666 | orchestrator | Saturday 05 July 2025 22:52:22 +0000 (0:00:00.232) 0:00:30.693 ********* 2025-07-05 22:52:30.505677 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.505688 | orchestrator | 2025-07-05 22:52:30.505699 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:52:30.505710 | orchestrator | Saturday 05 July 2025 22:52:22 +0000 (0:00:00.209) 0:00:30.902 ********* 2025-07-05 22:52:30.505721 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.505732 | orchestrator | 2025-07-05 22:52:30.505743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:52:30.505754 | orchestrator | Saturday 05 July 2025 22:52:22 +0000 (0:00:00.234) 0:00:31.137 ********* 2025-07-05 22:52:30.505765 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.505776 | orchestrator | 2025-07-05 22:52:30.505787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:52:30.505798 | orchestrator | Saturday 05 July 2025 22:52:22 +0000 (0:00:00.212) 0:00:31.350 ********* 2025-07-05 22:52:30.505809 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.505821 | orchestrator | 2025-07-05 22:52:30.505832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:52:30.505844 | orchestrator | Saturday 05 July 2025 22:52:22 +0000 (0:00:00.191) 0:00:31.541 ********* 2025-07-05 22:52:30.505857 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.505869 | orchestrator | 2025-07-05 22:52:30.505881 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:52:30.505893 | orchestrator | Saturday 05 July 2025 22:52:23 +0000 (0:00:00.217) 0:00:31.758 ********* 2025-07-05 22:52:30.505906 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.505919 
| orchestrator | 2025-07-05 22:52:30.505931 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:52:30.505966 | orchestrator | Saturday 05 July 2025 22:52:23 +0000 (0:00:00.210) 0:00:31.969 ********* 2025-07-05 22:52:30.505979 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122) 2025-07-05 22:52:30.505992 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122) 2025-07-05 22:52:30.506005 | orchestrator | 2025-07-05 22:52:30.506078 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:52:30.506094 | orchestrator | Saturday 05 July 2025 22:52:24 +0000 (0:00:00.684) 0:00:32.654 ********* 2025-07-05 22:52:30.506107 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8a7d49ca-9238-4676-a846-742ace525871) 2025-07-05 22:52:30.506119 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8a7d49ca-9238-4676-a846-742ace525871) 2025-07-05 22:52:30.506132 | orchestrator | 2025-07-05 22:52:30.506144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:52:30.506157 | orchestrator | Saturday 05 July 2025 22:52:24 +0000 (0:00:00.891) 0:00:33.545 ********* 2025-07-05 22:52:30.506169 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ba536110-d8e3-4c62-9758-5989affe708c) 2025-07-05 22:52:30.506182 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ba536110-d8e3-4c62-9758-5989affe708c) 2025-07-05 22:52:30.506194 | orchestrator | 2025-07-05 22:52:30.506207 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:52:30.506218 | orchestrator | Saturday 05 July 2025 22:52:25 +0000 (0:00:00.431) 0:00:33.977 ********* 2025-07-05 22:52:30.506229 | orchestrator | ok: 
[testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f21d976d-9ccd-433e-8515-86bf556b9e6c) 2025-07-05 22:52:30.506262 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f21d976d-9ccd-433e-8515-86bf556b9e6c) 2025-07-05 22:52:30.506274 | orchestrator | 2025-07-05 22:52:30.506285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:52:30.506296 | orchestrator | Saturday 05 July 2025 22:52:25 +0000 (0:00:00.470) 0:00:34.447 ********* 2025-07-05 22:52:30.506307 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-05 22:52:30.506318 | orchestrator | 2025-07-05 22:52:30.506329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.506340 | orchestrator | Saturday 05 July 2025 22:52:26 +0000 (0:00:00.351) 0:00:34.799 ********* 2025-07-05 22:52:30.506370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-07-05 22:52:30.506382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-07-05 22:52:30.506393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-07-05 22:52:30.506404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-07-05 22:52:30.506415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-07-05 22:52:30.506426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-07-05 22:52:30.506437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-07-05 22:52:30.506448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-07-05 22:52:30.506459 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-07-05 22:52:30.506470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-07-05 22:52:30.506481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-07-05 22:52:30.506492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-07-05 22:52:30.506512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-07-05 22:52:30.506523 | orchestrator | 2025-07-05 22:52:30.506534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.506545 | orchestrator | Saturday 05 July 2025 22:52:26 +0000 (0:00:00.384) 0:00:35.183 ********* 2025-07-05 22:52:30.506556 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.506567 | orchestrator | 2025-07-05 22:52:30.506579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.506590 | orchestrator | Saturday 05 July 2025 22:52:26 +0000 (0:00:00.225) 0:00:35.409 ********* 2025-07-05 22:52:30.506601 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.506612 | orchestrator | 2025-07-05 22:52:30.506623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.506634 | orchestrator | Saturday 05 July 2025 22:52:27 +0000 (0:00:00.228) 0:00:35.637 ********* 2025-07-05 22:52:30.506645 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.506656 | orchestrator | 2025-07-05 22:52:30.506667 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.506679 | orchestrator | Saturday 05 July 2025 22:52:27 +0000 (0:00:00.228) 0:00:35.866 ********* 2025-07-05 22:52:30.506689 | orchestrator | 
skipping: [testbed-node-5] 2025-07-05 22:52:30.506700 | orchestrator | 2025-07-05 22:52:30.506712 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.506723 | orchestrator | Saturday 05 July 2025 22:52:27 +0000 (0:00:00.231) 0:00:36.097 ********* 2025-07-05 22:52:30.506734 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.506745 | orchestrator | 2025-07-05 22:52:30.506756 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.506767 | orchestrator | Saturday 05 July 2025 22:52:27 +0000 (0:00:00.210) 0:00:36.308 ********* 2025-07-05 22:52:30.506778 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.506789 | orchestrator | 2025-07-05 22:52:30.506800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.506818 | orchestrator | Saturday 05 July 2025 22:52:28 +0000 (0:00:00.677) 0:00:36.985 ********* 2025-07-05 22:52:30.506829 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.506840 | orchestrator | 2025-07-05 22:52:30.506851 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.506862 | orchestrator | Saturday 05 July 2025 22:52:28 +0000 (0:00:00.255) 0:00:37.241 ********* 2025-07-05 22:52:30.506873 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.506884 | orchestrator | 2025-07-05 22:52:30.506895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.506906 | orchestrator | Saturday 05 July 2025 22:52:28 +0000 (0:00:00.206) 0:00:37.448 ********* 2025-07-05 22:52:30.506917 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-07-05 22:52:30.506928 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-07-05 22:52:30.506940 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-07-05 
22:52:30.506951 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-07-05 22:52:30.506962 | orchestrator | 2025-07-05 22:52:30.506973 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.506985 | orchestrator | Saturday 05 July 2025 22:52:29 +0000 (0:00:00.689) 0:00:38.137 ********* 2025-07-05 22:52:30.506995 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.507007 | orchestrator | 2025-07-05 22:52:30.507018 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.507034 | orchestrator | Saturday 05 July 2025 22:52:29 +0000 (0:00:00.217) 0:00:38.354 ********* 2025-07-05 22:52:30.507045 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.507057 | orchestrator | 2025-07-05 22:52:30.507068 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.507079 | orchestrator | Saturday 05 July 2025 22:52:29 +0000 (0:00:00.226) 0:00:38.581 ********* 2025-07-05 22:52:30.507096 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.507108 | orchestrator | 2025-07-05 22:52:30.507119 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:52:30.507130 | orchestrator | Saturday 05 July 2025 22:52:30 +0000 (0:00:00.200) 0:00:38.781 ********* 2025-07-05 22:52:30.507141 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:52:30.507152 | orchestrator | 2025-07-05 22:52:30.507163 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-05 22:52:30.507180 | orchestrator | Saturday 05 July 2025 22:52:30 +0000 (0:00:00.293) 0:00:39.075 ********* 2025-07-05 22:52:34.992536 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-07-05 22:52:34.992644 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 
2025-07-05 22:52:34.992659 | orchestrator |
2025-07-05 22:52:34.992672 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-07-05 22:52:34.992683 | orchestrator | Saturday 05 July 2025 22:52:30 +0000 (0:00:00.211) 0:00:39.286 *********
2025-07-05 22:52:34.992695 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:52:34.992706 | orchestrator |
2025-07-05 22:52:34.992718 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-07-05 22:52:34.992729 | orchestrator | Saturday 05 July 2025 22:52:30 +0000 (0:00:00.156) 0:00:39.443 *********
2025-07-05 22:52:34.992740 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:52:34.992750 | orchestrator |
2025-07-05 22:52:34.992762 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-07-05 22:52:34.992772 | orchestrator | Saturday 05 July 2025 22:52:31 +0000 (0:00:00.147) 0:00:39.590 *********
2025-07-05 22:52:34.992783 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:52:34.992794 | orchestrator |
2025-07-05 22:52:34.992805 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-07-05 22:52:34.992816 | orchestrator | Saturday 05 July 2025 22:52:31 +0000 (0:00:00.158) 0:00:39.749 *********
2025-07-05 22:52:34.992827 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:52:34.992839 | orchestrator |
2025-07-05 22:52:34.992850 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-07-05 22:52:34.992861 | orchestrator | Saturday 05 July 2025 22:52:31 +0000 (0:00:00.347) 0:00:40.097 *********
2025-07-05 22:52:34.992872 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '469f88b0-11f8-5147-93f6-bf0afec867dc'}})
2025-07-05 22:52:34.992884 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2969909f-2c17-514e-91b3-dec9da8cf58e'}})
2025-07-05 22:52:34.992894 | orchestrator |
2025-07-05 22:52:34.992906 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-07-05 22:52:34.992917 | orchestrator | Saturday 05 July 2025 22:52:31 +0000 (0:00:00.186) 0:00:40.283 *********
2025-07-05 22:52:34.992928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '469f88b0-11f8-5147-93f6-bf0afec867dc'}})
2025-07-05 22:52:34.992941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2969909f-2c17-514e-91b3-dec9da8cf58e'}})
2025-07-05 22:52:34.992952 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:52:34.992963 | orchestrator |
2025-07-05 22:52:34.992975 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-07-05 22:52:34.992986 | orchestrator | Saturday 05 July 2025 22:52:31 +0000 (0:00:00.169) 0:00:40.453 *********
2025-07-05 22:52:34.992997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '469f88b0-11f8-5147-93f6-bf0afec867dc'}})
2025-07-05 22:52:34.993008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2969909f-2c17-514e-91b3-dec9da8cf58e'}})
2025-07-05 22:52:34.993019 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:52:34.993030 | orchestrator |
2025-07-05 22:52:34.993041 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-07-05 22:52:34.993076 | orchestrator | Saturday 05 July 2025 22:52:32 +0000 (0:00:00.153) 0:00:40.607 *********
2025-07-05 22:52:34.993089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '469f88b0-11f8-5147-93f6-bf0afec867dc'}})
2025-07-05 22:52:34.993102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2969909f-2c17-514e-91b3-dec9da8cf58e'}})
2025-07-05 22:52:34.993114 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:52:34.993126 | orchestrator |
2025-07-05 22:52:34.993139 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-07-05 22:52:34.993151 | orchestrator | Saturday 05 July 2025 22:52:32 +0000 (0:00:00.157) 0:00:40.765 *********
2025-07-05 22:52:34.993163 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:52:34.993175 | orchestrator |
2025-07-05 22:52:34.993188 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-07-05 22:52:34.993200 | orchestrator | Saturday 05 July 2025 22:52:32 +0000 (0:00:00.150) 0:00:40.915 *********
2025-07-05 22:52:34.993212 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:52:34.993224 | orchestrator |
2025-07-05 22:52:34.993266 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-07-05 22:52:34.993286 | orchestrator | Saturday 05 July 2025 22:52:32 +0000 (0:00:00.161) 0:00:41.077 *********
2025-07-05 22:52:34.993307 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:52:34.993325 | orchestrator |
2025-07-05 22:52:34.993344 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-07-05 22:52:34.993364 | orchestrator | Saturday 05 July 2025 22:52:32 +0000 (0:00:00.151) 0:00:41.229 *********
2025-07-05 22:52:34.993382 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:52:34.993399 | orchestrator |
2025-07-05 22:52:34.993417 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-07-05 22:52:34.993436 | orchestrator | Saturday 05 July 2025 22:52:32 +0000 (0:00:00.166) 0:00:41.395 *********
2025-07-05 22:52:34.993454 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:52:34.993473 | orchestrator |
2025-07-05 22:52:34.993492 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-07-05 22:52:34.993509 | orchestrator | Saturday 05 July 2025 22:52:32 +0000 (0:00:00.174) 0:00:41.570 *********
2025-07-05 22:52:34.993521 | orchestrator | ok: [testbed-node-5] => {
2025-07-05 22:52:34.993532 | orchestrator |     "ceph_osd_devices": {
2025-07-05 22:52:34.993543 | orchestrator |         "sdb": {
2025-07-05 22:52:34.993559 | orchestrator |             "osd_lvm_uuid": "469f88b0-11f8-5147-93f6-bf0afec867dc"
2025-07-05 22:52:34.993592 | orchestrator |         },
2025-07-05 22:52:34.993604 | orchestrator |         "sdc": {
2025-07-05 22:52:34.993615 | orchestrator |             "osd_lvm_uuid": "2969909f-2c17-514e-91b3-dec9da8cf58e"
2025-07-05 22:52:34.993626 | orchestrator |         }
2025-07-05 22:52:34.993637 | orchestrator |     }
2025-07-05 22:52:34.993648 | orchestrator | }
2025-07-05 22:52:34.993659 | orchestrator |
2025-07-05 22:52:34.993670 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-07-05 22:52:34.993681 | orchestrator | Saturday 05 July 2025 22:52:33 +0000 (0:00:00.135) 0:00:41.705 *********
2025-07-05 22:52:34.993692 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:52:34.993703 | orchestrator |
2025-07-05 22:52:34.993714 | orchestrator | TASK [Print DB devices] ********************************************************
2025-07-05 22:52:34.993725 | orchestrator | Saturday 05 July 2025 22:52:33 +0000 (0:00:00.131) 0:00:41.836 *********
2025-07-05 22:52:34.993735 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:52:34.993746 | orchestrator |
2025-07-05 22:52:34.993757 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-07-05 22:52:34.993768 | orchestrator | Saturday 05 July 2025 22:52:33 +0000 (0:00:00.375) 0:00:42.212 *********
2025-07-05 22:52:34.993779 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:52:34.993790 | orchestrator |
2025-07-05 22:52:34.993801 | orchestrator | TASK [Print configuration data] ************************************************
2025-07-05 22:52:34.993811 | orchestrator | Saturday 05 July 2025 22:52:33 +0000 (0:00:00.166) 0:00:42.378 *********
2025-07-05 22:52:34.993833 | orchestrator | changed: [testbed-node-5] => {
2025-07-05 22:52:34.993843 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-07-05 22:52:34.993874 | orchestrator |         "ceph_osd_devices": {
2025-07-05 22:52:34.993886 | orchestrator |             "sdb": {
2025-07-05 22:52:34.993896 | orchestrator |                 "osd_lvm_uuid": "469f88b0-11f8-5147-93f6-bf0afec867dc"
2025-07-05 22:52:34.993908 | orchestrator |             },
2025-07-05 22:52:34.993918 | orchestrator |             "sdc": {
2025-07-05 22:52:34.993929 | orchestrator |                 "osd_lvm_uuid": "2969909f-2c17-514e-91b3-dec9da8cf58e"
2025-07-05 22:52:34.993940 | orchestrator |             }
2025-07-05 22:52:34.993951 | orchestrator |         },
2025-07-05 22:52:34.993962 | orchestrator |         "lvm_volumes": [
2025-07-05 22:52:34.993973 | orchestrator |             {
2025-07-05 22:52:34.993984 | orchestrator |                 "data": "osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc",
2025-07-05 22:52:34.993995 | orchestrator |                 "data_vg": "ceph-469f88b0-11f8-5147-93f6-bf0afec867dc"
2025-07-05 22:52:34.994005 | orchestrator |             },
2025-07-05 22:52:34.994070 | orchestrator |             {
2025-07-05 22:52:34.994085 | orchestrator |                 "data": "osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e",
2025-07-05 22:52:34.994096 | orchestrator |                 "data_vg": "ceph-2969909f-2c17-514e-91b3-dec9da8cf58e"
2025-07-05 22:52:34.994107 | orchestrator |             }
2025-07-05 22:52:34.994118 | orchestrator |         ]
2025-07-05 22:52:34.994129 | orchestrator |     }
2025-07-05 22:52:34.994140 | orchestrator | }
2025-07-05 22:52:34.994151 | orchestrator |
2025-07-05 22:52:34.994162 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-07-05 22:52:34.994173 | orchestrator | Saturday 05 July 2025 22:52:34 +0000 (0:00:00.232) 0:00:42.611 *********
2025-07-05 22:52:34.994184 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-07-05 22:52:34.994195 | orchestrator |
2025-07-05 22:52:34.994206 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 22:52:34.994217 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-05 22:52:34.994229 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-05 22:52:34.994287 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-05 22:52:34.994300 | orchestrator |
2025-07-05 22:52:34.994333 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 22:52:34.994344 | orchestrator | Saturday 05 July 2025 22:52:34 +0000 (0:00:00.941) 0:00:43.552 *********
2025-07-05 22:52:34.994355 | orchestrator | ===============================================================================
2025-07-05 22:52:34.994366 | orchestrator | Write configuration file ------------------------------------------------ 4.23s
2025-07-05 22:52:34.994376 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s
2025-07-05 22:52:34.994387 | orchestrator | Get initial list of available block devices ----------------------------- 1.23s
2025-07-05 22:52:34.994398 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s
2025-07-05 22:52:34.994412 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.03s
2025-07-05 22:52:34.994431 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s
2025-07-05 22:52:34.994458 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s
2025-07-05 22:52:34.994479 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.80s
2025-07-05 22:52:34.994506 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2025-07-05 22:52:34.994539 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2025-07-05 22:52:34.994557 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-07-05 22:52:34.994575 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.68s
2025-07-05 22:52:34.994592 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-07-05 22:52:34.994612 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-07-05 22:52:34.994642 | orchestrator | Print configuration data ------------------------------------------------ 0.67s
2025-07-05 22:52:35.319777 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2025-07-05 22:52:35.319881 | orchestrator | Print DB devices -------------------------------------------------------- 0.67s
2025-07-05 22:52:35.319896 | orchestrator | Set WAL devices config data --------------------------------------------- 0.66s
2025-07-05 22:52:35.319908 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.65s
2025-07-05 22:52:35.319920 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-07-05 22:52:57.759920 | orchestrator | 2025-07-05 22:52:57 | INFO  | Task f4ce453c-96da-44da-a6cf-60227a015b35 (sync inventory) is running in background. Output coming soon.
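The "Print configuration data" task in the play above shows how each entry in `ceph_osd_devices` is expanded into a block-only `lvm_volumes` entry: the per-device `osd_lvm_uuid` is prefixed with `osd-block-` for the LV name and `ceph-` for the VG name. A minimal Python sketch of that mapping, using the values printed for testbed-node-5 (the function name and input layout are illustrative, not the playbook's actual code):

```python
import uuid

def build_lvm_volumes(ceph_osd_devices):
    """Expand a ceph_osd_devices dict into block-only lvm_volumes
    entries, mirroring the structure printed by the play above."""
    return [
        {
            "data": f"osd-block-{dev['osd_lvm_uuid']}",
            "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
        }
        for dev in ceph_osd_devices.values()
    ]

# Values taken verbatim from the log output for testbed-node-5.
devices = {
    "sdb": {"osd_lvm_uuid": "469f88b0-11f8-5147-93f6-bf0afec867dc"},
    "sdc": {"osd_lvm_uuid": "2969909f-2c17-514e-91b3-dec9da8cf58e"},
}
volumes = build_lvm_volumes(devices)

# The logged UUIDs carry version 5, i.e. they are name-based and therefore
# reproducible across runs (the namespace/name inputs are not visible here).
assert all(uuid.UUID(d["osd_lvm_uuid"]).version == 5 for d in devices.values())
```

Because the names are derived deterministically from the UUIDs, re-running the play yields the same VG/LV names, which is why the "Set UUIDs for OSD VGs/LVs" task only reports `ok` rather than `changed`.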
2025-07-05 22:53:16.881433 | orchestrator | 2025-07-05 22:52:59 | INFO  | Starting group_vars file reorganization
2025-07-05 22:53:16.881546 | orchestrator | 2025-07-05 22:52:59 | INFO  | Moved 0 file(s) to their respective directories
2025-07-05 22:53:16.881562 | orchestrator | 2025-07-05 22:52:59 | INFO  | Group_vars file reorganization completed
2025-07-05 22:53:16.881575 | orchestrator | 2025-07-05 22:53:01 | INFO  | Starting variable preparation from inventory
2025-07-05 22:53:16.881586 | orchestrator | 2025-07-05 22:53:02 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-07-05 22:53:16.881598 | orchestrator | 2025-07-05 22:53:02 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-07-05 22:53:16.881610 | orchestrator | 2025-07-05 22:53:02 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-07-05 22:53:16.881621 | orchestrator | 2025-07-05 22:53:02 | INFO  | 3 file(s) written, 6 host(s) processed
2025-07-05 22:53:16.881632 | orchestrator | 2025-07-05 22:53:02 | INFO  | Variable preparation completed
2025-07-05 22:53:16.881644 | orchestrator | 2025-07-05 22:53:03 | INFO  | Starting inventory overwrite handling
2025-07-05 22:53:16.881655 | orchestrator | 2025-07-05 22:53:03 | INFO  | Handling group overwrites in 99-overwrite
2025-07-05 22:53:16.881666 | orchestrator | 2025-07-05 22:53:03 | INFO  | Removing group frr:children from 60-generic
2025-07-05 22:53:16.881678 | orchestrator | 2025-07-05 22:53:03 | INFO  | Removing group storage:children from 50-kolla
2025-07-05 22:53:16.881689 | orchestrator | 2025-07-05 22:53:03 | INFO  | Removing group netbird:children from 50-infrastruture
2025-07-05 22:53:16.881700 | orchestrator | 2025-07-05 22:53:03 | INFO  | Removing group ceph-mds from 50-ceph
2025-07-05 22:53:16.881712 | orchestrator | 2025-07-05 22:53:03 | INFO  | Removing group ceph-rgw from 50-ceph
2025-07-05 22:53:16.881723 | orchestrator | 2025-07-05 22:53:03 | INFO  | Handling group overwrites in 20-roles
2025-07-05 22:53:16.881735 | orchestrator | 2025-07-05 22:53:03 | INFO  | Removing group k3s_node from 50-infrastruture
2025-07-05 22:53:16.881746 | orchestrator | 2025-07-05 22:53:03 | INFO  | Removed 6 group(s) in total
2025-07-05 22:53:16.881758 | orchestrator | 2025-07-05 22:53:03 | INFO  | Inventory overwrite handling completed
2025-07-05 22:53:16.881769 | orchestrator | 2025-07-05 22:53:04 | INFO  | Starting merge of inventory files
2025-07-05 22:53:16.881811 | orchestrator | 2025-07-05 22:53:04 | INFO  | Inventory files merged successfully
2025-07-05 22:53:16.881823 | orchestrator | 2025-07-05 22:53:08 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-07-05 22:53:16.881834 | orchestrator | 2025-07-05 22:53:15 | INFO  | Successfully wrote ClusterShell configuration
2025-07-05 22:53:16.881846 | orchestrator | [master ee9178b] 2025-07-05-22-53
2025-07-05 22:53:16.881858 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-07-05 22:53:19.021183 | orchestrator | 2025-07-05 22:53:19 | INFO  | Task 07aa267b-1b24-4d87-ba23-b1e021e8aa42 (ceph-create-lvm-devices) was prepared for execution.
2025-07-05 22:53:19.021379 | orchestrator | 2025-07-05 22:53:19 | INFO  | It takes a moment until task 07aa267b-1b24-4d87-ba23-b1e021e8aa42 (ceph-create-lvm-devices) has been started and output is visible here.
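The ceph-create-lvm-devices play that follows turns the compiled `lvm_volumes` list into actual LVM objects: one volume group per OSD device ("Create block VGs"), then one logical volume per VG ("Create block LVs"). A dry-run sketch of the equivalent command plan, using the testbed-node-3 values from the log; the `/dev/sd*` physical-volume paths and the `-l 100%VG` sizing are assumptions, since the play drives this through Ansible tasks rather than a shell script and the log only shows the device keys:

```python
def plan_lvm_commands(lvm_volumes, pv_by_vg):
    """Produce the vgcreate/lvcreate calls corresponding to the
    'Create block VGs' and 'Create block LVs' tasks (dry run only)."""
    cmds = []
    # One VG per OSD device, backed by its physical volume.
    for vol in lvm_volumes:
        cmds.append(f"vgcreate {vol['data_vg']} {pv_by_vg[vol['data_vg']]}")
    # One LV per VG, holding the OSD block data.
    for vol in lvm_volumes:
        cmds.append(f"lvcreate -l 100%VG -n {vol['data']} {vol['data_vg']}")
    return cmds

# testbed-node-3 values as printed by the 'Create block VGs' task.
volumes = [
    {"data": "osd-block-8de564a6-401f-59e2-a445-234b3be175ce",
     "data_vg": "ceph-8de564a6-401f-59e2-a445-234b3be175ce"},
    {"data": "osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156",
     "data_vg": "ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156"},
]
# Hypothetical PV mapping; the real play derives this from the
# 'Create dict of block VGs -> PVs from ceph_osd_devices' task.
pvs = {
    "ceph-8de564a6-401f-59e2-a445-234b3be175ce": "/dev/sdb",
    "ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156": "/dev/sdc",
}
commands = plan_lvm_commands(volumes, pvs)
```

This ordering matters: the VGs must exist before the LVs are carved out of them, which is why the log shows "Create block VGs" reporting `changed` before "Create block LVs" begins.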
2025-07-05 22:53:30.420344 | orchestrator | 2025-07-05 22:53:30.420454 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-05 22:53:30.420471 | orchestrator | 2025-07-05 22:53:30.420483 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-05 22:53:30.420495 | orchestrator | Saturday 05 July 2025 22:53:23 +0000 (0:00:00.313) 0:00:00.313 ********* 2025-07-05 22:53:30.420507 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-05 22:53:30.420518 | orchestrator | 2025-07-05 22:53:30.420530 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-05 22:53:30.420541 | orchestrator | Saturday 05 July 2025 22:53:23 +0000 (0:00:00.248) 0:00:00.561 ********* 2025-07-05 22:53:30.420552 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:53:30.420565 | orchestrator | 2025-07-05 22:53:30.420576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:53:30.420587 | orchestrator | Saturday 05 July 2025 22:53:23 +0000 (0:00:00.226) 0:00:00.787 ********* 2025-07-05 22:53:30.420599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-07-05 22:53:30.420611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-07-05 22:53:30.420622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-07-05 22:53:30.420651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-07-05 22:53:30.420663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-07-05 22:53:30.420674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-07-05 22:53:30.420685 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-07-05 22:53:30.420696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-07-05 22:53:30.420707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-07-05 22:53:30.420718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-07-05 22:53:30.420729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-07-05 22:53:30.420740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-07-05 22:53:30.420751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-07-05 22:53:30.420762 | orchestrator | 2025-07-05 22:53:30.420773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:53:30.420784 | orchestrator | Saturday 05 July 2025 22:53:23 +0000 (0:00:00.403) 0:00:01.191 ********* 2025-07-05 22:53:30.420795 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.420807 | orchestrator | 2025-07-05 22:53:30.420818 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:53:30.420864 | orchestrator | Saturday 05 July 2025 22:53:24 +0000 (0:00:00.433) 0:00:01.625 ********* 2025-07-05 22:53:30.420885 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.420905 | orchestrator | 2025-07-05 22:53:30.420924 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:53:30.420943 | orchestrator | Saturday 05 July 2025 22:53:24 +0000 (0:00:00.215) 0:00:01.840 ********* 2025-07-05 22:53:30.420963 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.420984 | orchestrator | 2025-07-05 22:53:30.421007 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-07-05 22:53:30.421028 | orchestrator | Saturday 05 July 2025 22:53:24 +0000 (0:00:00.189) 0:00:02.030 ********* 2025-07-05 22:53:30.421044 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.421056 | orchestrator | 2025-07-05 22:53:30.421069 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:53:30.421081 | orchestrator | Saturday 05 July 2025 22:53:25 +0000 (0:00:00.190) 0:00:02.221 ********* 2025-07-05 22:53:30.421093 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.421105 | orchestrator | 2025-07-05 22:53:30.421117 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:53:30.421130 | orchestrator | Saturday 05 July 2025 22:53:25 +0000 (0:00:00.197) 0:00:02.419 ********* 2025-07-05 22:53:30.421142 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.421154 | orchestrator | 2025-07-05 22:53:30.421167 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:53:30.421207 | orchestrator | Saturday 05 July 2025 22:53:25 +0000 (0:00:00.212) 0:00:02.631 ********* 2025-07-05 22:53:30.421221 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.421232 | orchestrator | 2025-07-05 22:53:30.421244 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:53:30.421255 | orchestrator | Saturday 05 July 2025 22:53:25 +0000 (0:00:00.211) 0:00:02.842 ********* 2025-07-05 22:53:30.421267 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.421277 | orchestrator | 2025-07-05 22:53:30.421289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:53:30.421300 | orchestrator | Saturday 05 July 2025 22:53:25 +0000 (0:00:00.189) 0:00:03.032 ********* 2025-07-05 22:53:30.421311 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c) 2025-07-05 22:53:30.421324 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c) 2025-07-05 22:53:30.421335 | orchestrator | 2025-07-05 22:53:30.421346 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:53:30.421357 | orchestrator | Saturday 05 July 2025 22:53:26 +0000 (0:00:00.422) 0:00:03.455 ********* 2025-07-05 22:53:30.421394 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5326e027-1676-4a37-b778-dc441a5dd20f) 2025-07-05 22:53:30.421407 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5326e027-1676-4a37-b778-dc441a5dd20f) 2025-07-05 22:53:30.421418 | orchestrator | 2025-07-05 22:53:30.421429 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:53:30.421440 | orchestrator | Saturday 05 July 2025 22:53:26 +0000 (0:00:00.402) 0:00:03.857 ********* 2025-07-05 22:53:30.421451 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ed4648fa-96a1-4881-93bd-124d41734f11) 2025-07-05 22:53:30.421462 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ed4648fa-96a1-4881-93bd-124d41734f11) 2025-07-05 22:53:30.421474 | orchestrator | 2025-07-05 22:53:30.421485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:53:30.421496 | orchestrator | Saturday 05 July 2025 22:53:27 +0000 (0:00:00.647) 0:00:04.504 ********* 2025-07-05 22:53:30.421507 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_21be9c94-8d55-4d0c-8ee7-a63f66622af7) 2025-07-05 22:53:30.421518 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_21be9c94-8d55-4d0c-8ee7-a63f66622af7) 2025-07-05 22:53:30.421540 | orchestrator | 2025-07-05 22:53:30.421551 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:53:30.421562 | orchestrator | Saturday 05 July 2025 22:53:27 +0000 (0:00:00.630) 0:00:05.134 ********* 2025-07-05 22:53:30.421573 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-05 22:53:30.421584 | orchestrator | 2025-07-05 22:53:30.421595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:30.421606 | orchestrator | Saturday 05 July 2025 22:53:28 +0000 (0:00:00.624) 0:00:05.759 ********* 2025-07-05 22:53:30.421617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-07-05 22:53:30.421628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-07-05 22:53:30.421639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-07-05 22:53:30.421650 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-07-05 22:53:30.421660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-07-05 22:53:30.421671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-07-05 22:53:30.421682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-07-05 22:53:30.421693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-07-05 22:53:30.421704 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-07-05 22:53:30.421715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-07-05 22:53:30.421726 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-07-05 22:53:30.421736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-07-05 22:53:30.421747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-07-05 22:53:30.421758 | orchestrator | 2025-07-05 22:53:30.421769 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:30.421780 | orchestrator | Saturday 05 July 2025 22:53:28 +0000 (0:00:00.370) 0:00:06.129 ********* 2025-07-05 22:53:30.421791 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.421802 | orchestrator | 2025-07-05 22:53:30.421813 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:30.421824 | orchestrator | Saturday 05 July 2025 22:53:29 +0000 (0:00:00.191) 0:00:06.320 ********* 2025-07-05 22:53:30.421835 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.421846 | orchestrator | 2025-07-05 22:53:30.421857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:30.421868 | orchestrator | Saturday 05 July 2025 22:53:29 +0000 (0:00:00.199) 0:00:06.520 ********* 2025-07-05 22:53:30.421879 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.421890 | orchestrator | 2025-07-05 22:53:30.421901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:30.421912 | orchestrator | Saturday 05 July 2025 22:53:29 +0000 (0:00:00.175) 0:00:06.696 ********* 2025-07-05 22:53:30.421923 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.421933 | orchestrator | 2025-07-05 22:53:30.421945 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:30.421956 | orchestrator | Saturday 05 July 2025 
22:53:29 +0000 (0:00:00.184) 0:00:06.880 ********* 2025-07-05 22:53:30.421967 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.421978 | orchestrator | 2025-07-05 22:53:30.421989 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:30.422000 | orchestrator | Saturday 05 July 2025 22:53:29 +0000 (0:00:00.173) 0:00:07.053 ********* 2025-07-05 22:53:30.422071 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.422086 | orchestrator | 2025-07-05 22:53:30.422097 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:30.422108 | orchestrator | Saturday 05 July 2025 22:53:30 +0000 (0:00:00.187) 0:00:07.241 ********* 2025-07-05 22:53:30.422119 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:30.422130 | orchestrator | 2025-07-05 22:53:30.422141 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:30.422152 | orchestrator | Saturday 05 July 2025 22:53:30 +0000 (0:00:00.193) 0:00:07.434 ********* 2025-07-05 22:53:30.422171 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.760468 | orchestrator | 2025-07-05 22:53:37.760578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:37.760595 | orchestrator | Saturday 05 July 2025 22:53:30 +0000 (0:00:00.181) 0:00:07.616 ********* 2025-07-05 22:53:37.760607 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-07-05 22:53:37.760620 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-07-05 22:53:37.760632 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-07-05 22:53:37.760643 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-07-05 22:53:37.760654 | orchestrator | 2025-07-05 22:53:37.760665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:37.760678 | 
orchestrator | Saturday 05 July 2025 22:53:31 +0000 (0:00:00.876) 0:00:08.492 ********* 2025-07-05 22:53:37.760689 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.760706 | orchestrator | 2025-07-05 22:53:37.760725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:37.760743 | orchestrator | Saturday 05 July 2025 22:53:31 +0000 (0:00:00.173) 0:00:08.666 ********* 2025-07-05 22:53:37.760763 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.760792 | orchestrator | 2025-07-05 22:53:37.760814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:37.760832 | orchestrator | Saturday 05 July 2025 22:53:31 +0000 (0:00:00.180) 0:00:08.847 ********* 2025-07-05 22:53:37.760852 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.760863 | orchestrator | 2025-07-05 22:53:37.760874 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:53:37.760885 | orchestrator | Saturday 05 July 2025 22:53:31 +0000 (0:00:00.193) 0:00:09.041 ********* 2025-07-05 22:53:37.760896 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.760907 | orchestrator | 2025-07-05 22:53:37.760918 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-05 22:53:37.760950 | orchestrator | Saturday 05 July 2025 22:53:32 +0000 (0:00:00.174) 0:00:09.215 ********* 2025-07-05 22:53:37.760962 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.760972 | orchestrator | 2025-07-05 22:53:37.760983 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-05 22:53:37.760995 | orchestrator | Saturday 05 July 2025 22:53:32 +0000 (0:00:00.132) 0:00:09.347 ********* 2025-07-05 22:53:37.761006 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'8de564a6-401f-59e2-a445-234b3be175ce'}}) 2025-07-05 22:53:37.761018 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2634d3d6-ac41-59e6-b3da-1ade7ee25156'}}) 2025-07-05 22:53:37.761029 | orchestrator | 2025-07-05 22:53:37.761040 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-05 22:53:37.761051 | orchestrator | Saturday 05 July 2025 22:53:32 +0000 (0:00:00.172) 0:00:09.520 ********* 2025-07-05 22:53:37.761064 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'}) 2025-07-05 22:53:37.761076 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'}) 2025-07-05 22:53:37.761088 | orchestrator | 2025-07-05 22:53:37.761124 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-05 22:53:37.761136 | orchestrator | Saturday 05 July 2025 22:53:34 +0000 (0:00:01.967) 0:00:11.487 ********* 2025-07-05 22:53:37.761147 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:37.761159 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:37.761170 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.761212 | orchestrator | 2025-07-05 22:53:37.761224 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-05 22:53:37.761235 | orchestrator | Saturday 05 July 2025 22:53:34 +0000 (0:00:00.137) 0:00:11.625 ********* 2025-07-05 22:53:37.761246 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'}) 2025-07-05 22:53:37.761257 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'}) 2025-07-05 22:53:37.761268 | orchestrator | 2025-07-05 22:53:37.761279 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-05 22:53:37.761290 | orchestrator | Saturday 05 July 2025 22:53:35 +0000 (0:00:01.453) 0:00:13.078 ********* 2025-07-05 22:53:37.761301 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:37.761313 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:37.761324 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.761335 | orchestrator | 2025-07-05 22:53:37.761346 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-05 22:53:37.761357 | orchestrator | Saturday 05 July 2025 22:53:36 +0000 (0:00:00.141) 0:00:13.220 ********* 2025-07-05 22:53:37.761368 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.761379 | orchestrator | 2025-07-05 22:53:37.761395 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-05 22:53:37.761425 | orchestrator | Saturday 05 July 2025 22:53:36 +0000 (0:00:00.119) 0:00:13.339 ********* 2025-07-05 22:53:37.761437 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:37.761449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:37.761460 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.761471 | orchestrator | 2025-07-05 22:53:37.761482 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-05 22:53:37.761493 | orchestrator | Saturday 05 July 2025 22:53:36 +0000 (0:00:00.272) 0:00:13.612 ********* 2025-07-05 22:53:37.761504 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.761515 | orchestrator | 2025-07-05 22:53:37.761526 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-05 22:53:37.761537 | orchestrator | Saturday 05 July 2025 22:53:36 +0000 (0:00:00.123) 0:00:13.735 ********* 2025-07-05 22:53:37.761548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:37.761559 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:37.761570 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.761581 | orchestrator | 2025-07-05 22:53:37.761592 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-05 22:53:37.761611 | orchestrator | Saturday 05 July 2025 22:53:36 +0000 (0:00:00.147) 0:00:13.883 ********* 2025-07-05 22:53:37.761622 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.761633 | orchestrator | 2025-07-05 22:53:37.761644 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-05 22:53:37.761655 | orchestrator | Saturday 05 July 2025 22:53:36 +0000 (0:00:00.117) 0:00:14.000 ********* 2025-07-05 22:53:37.761666 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:37.761678 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:37.761689 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.761700 | orchestrator | 2025-07-05 22:53:37.761711 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-05 22:53:37.761722 | orchestrator | Saturday 05 July 2025 22:53:36 +0000 (0:00:00.141) 0:00:14.141 ********* 2025-07-05 22:53:37.761733 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:53:37.761744 | orchestrator | 2025-07-05 22:53:37.761755 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-07-05 22:53:37.761766 | orchestrator | Saturday 05 July 2025 22:53:37 +0000 (0:00:00.121) 0:00:14.263 ********* 2025-07-05 22:53:37.761777 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:37.761788 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:37.761799 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.761810 | orchestrator | 2025-07-05 22:53:37.761821 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-05 22:53:37.761832 | orchestrator | Saturday 05 July 2025 22:53:37 +0000 (0:00:00.153) 0:00:14.417 ********* 2025-07-05 22:53:37.761847 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  
2025-07-05 22:53:37.761866 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:37.761884 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.761902 | orchestrator | 2025-07-05 22:53:37.761920 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-05 22:53:37.761938 | orchestrator | Saturday 05 July 2025 22:53:37 +0000 (0:00:00.137) 0:00:14.554 ********* 2025-07-05 22:53:37.761957 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:37.761976 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:37.761995 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.762014 | orchestrator | 2025-07-05 22:53:37.762097 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-05 22:53:37.762108 | orchestrator | Saturday 05 July 2025 22:53:37 +0000 (0:00:00.140) 0:00:14.694 ********* 2025-07-05 22:53:37.762119 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.762130 | orchestrator | 2025-07-05 22:53:37.762141 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-05 22:53:37.762153 | orchestrator | Saturday 05 July 2025 22:53:37 +0000 (0:00:00.130) 0:00:14.825 ********* 2025-07-05 22:53:37.762164 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:37.762199 | orchestrator | 2025-07-05 22:53:37.762223 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-05 22:53:43.619283 | orchestrator | Saturday 05 July 2025 22:53:37 +0000 (0:00:00.133) 
0:00:14.958 ********* 2025-07-05 22:53:43.619410 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.619439 | orchestrator | 2025-07-05 22:53:43.619460 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-05 22:53:43.619479 | orchestrator | Saturday 05 July 2025 22:53:37 +0000 (0:00:00.132) 0:00:15.091 ********* 2025-07-05 22:53:43.619498 | orchestrator | ok: [testbed-node-3] => { 2025-07-05 22:53:43.619518 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-05 22:53:43.619538 | orchestrator | } 2025-07-05 22:53:43.619558 | orchestrator | 2025-07-05 22:53:43.619569 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-05 22:53:43.619581 | orchestrator | Saturday 05 July 2025 22:53:38 +0000 (0:00:00.252) 0:00:15.344 ********* 2025-07-05 22:53:43.619592 | orchestrator | ok: [testbed-node-3] => { 2025-07-05 22:53:43.619603 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-05 22:53:43.619614 | orchestrator | } 2025-07-05 22:53:43.619625 | orchestrator | 2025-07-05 22:53:43.619636 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-05 22:53:43.619648 | orchestrator | Saturday 05 July 2025 22:53:38 +0000 (0:00:00.143) 0:00:15.487 ********* 2025-07-05 22:53:43.619659 | orchestrator | ok: [testbed-node-3] => { 2025-07-05 22:53:43.619670 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-05 22:53:43.619681 | orchestrator | } 2025-07-05 22:53:43.619692 | orchestrator | 2025-07-05 22:53:43.619703 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-05 22:53:43.619715 | orchestrator | Saturday 05 July 2025 22:53:38 +0000 (0:00:00.129) 0:00:15.617 ********* 2025-07-05 22:53:43.619725 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:53:43.619737 | orchestrator | 2025-07-05 22:53:43.619750 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-07-05 22:53:43.619763 | orchestrator | Saturday 05 July 2025 22:53:39 +0000 (0:00:00.629) 0:00:16.246 ********* 2025-07-05 22:53:43.619775 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:53:43.619787 | orchestrator | 2025-07-05 22:53:43.619800 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-05 22:53:43.619813 | orchestrator | Saturday 05 July 2025 22:53:39 +0000 (0:00:00.627) 0:00:16.874 ********* 2025-07-05 22:53:43.619825 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:53:43.619837 | orchestrator | 2025-07-05 22:53:43.619849 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-05 22:53:43.619862 | orchestrator | Saturday 05 July 2025 22:53:40 +0000 (0:00:00.527) 0:00:17.401 ********* 2025-07-05 22:53:43.619874 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:53:43.619888 | orchestrator | 2025-07-05 22:53:43.619908 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-05 22:53:43.619927 | orchestrator | Saturday 05 July 2025 22:53:40 +0000 (0:00:00.143) 0:00:17.545 ********* 2025-07-05 22:53:43.619945 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.619965 | orchestrator | 2025-07-05 22:53:43.619978 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-05 22:53:43.619991 | orchestrator | Saturday 05 July 2025 22:53:40 +0000 (0:00:00.111) 0:00:17.656 ********* 2025-07-05 22:53:43.620003 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.620014 | orchestrator | 2025-07-05 22:53:43.620025 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-05 22:53:43.620036 | orchestrator | Saturday 05 July 2025 22:53:40 +0000 (0:00:00.105) 0:00:17.762 ********* 2025-07-05 22:53:43.620047 | orchestrator | ok: 
[testbed-node-3] => { 2025-07-05 22:53:43.620059 | orchestrator |  "vgs_report": { 2025-07-05 22:53:43.620078 | orchestrator |  "vg": [] 2025-07-05 22:53:43.620097 | orchestrator |  } 2025-07-05 22:53:43.620116 | orchestrator | } 2025-07-05 22:53:43.620136 | orchestrator | 2025-07-05 22:53:43.620155 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-05 22:53:43.620225 | orchestrator | Saturday 05 July 2025 22:53:40 +0000 (0:00:00.130) 0:00:17.892 ********* 2025-07-05 22:53:43.620247 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.620265 | orchestrator | 2025-07-05 22:53:43.620304 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-05 22:53:43.620326 | orchestrator | Saturday 05 July 2025 22:53:40 +0000 (0:00:00.125) 0:00:18.018 ********* 2025-07-05 22:53:43.620344 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.620362 | orchestrator | 2025-07-05 22:53:43.620380 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-05 22:53:43.620399 | orchestrator | Saturday 05 July 2025 22:53:40 +0000 (0:00:00.119) 0:00:18.138 ********* 2025-07-05 22:53:43.620418 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.620437 | orchestrator | 2025-07-05 22:53:43.620455 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-05 22:53:43.620474 | orchestrator | Saturday 05 July 2025 22:53:41 +0000 (0:00:00.256) 0:00:18.395 ********* 2025-07-05 22:53:43.620492 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.620511 | orchestrator | 2025-07-05 22:53:43.620531 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-05 22:53:43.620550 | orchestrator | Saturday 05 July 2025 22:53:41 +0000 (0:00:00.130) 0:00:18.526 ********* 2025-07-05 22:53:43.620569 | orchestrator | skipping: 
[testbed-node-3] 2025-07-05 22:53:43.620581 | orchestrator | 2025-07-05 22:53:43.620592 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-05 22:53:43.620603 | orchestrator | Saturday 05 July 2025 22:53:41 +0000 (0:00:00.133) 0:00:18.659 ********* 2025-07-05 22:53:43.620614 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.620624 | orchestrator | 2025-07-05 22:53:43.620635 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-05 22:53:43.620646 | orchestrator | Saturday 05 July 2025 22:53:41 +0000 (0:00:00.130) 0:00:18.790 ********* 2025-07-05 22:53:43.620657 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.620668 | orchestrator | 2025-07-05 22:53:43.620678 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-07-05 22:53:43.620690 | orchestrator | Saturday 05 July 2025 22:53:41 +0000 (0:00:00.133) 0:00:18.923 ********* 2025-07-05 22:53:43.620700 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.620711 | orchestrator | 2025-07-05 22:53:43.620732 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-05 22:53:43.620771 | orchestrator | Saturday 05 July 2025 22:53:41 +0000 (0:00:00.133) 0:00:19.057 ********* 2025-07-05 22:53:43.620791 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.620810 | orchestrator | 2025-07-05 22:53:43.620830 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-05 22:53:43.620847 | orchestrator | Saturday 05 July 2025 22:53:41 +0000 (0:00:00.144) 0:00:19.201 ********* 2025-07-05 22:53:43.620865 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.620885 | orchestrator | 2025-07-05 22:53:43.620903 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-05 22:53:43.620922 | 
orchestrator | Saturday 05 July 2025 22:53:42 +0000 (0:00:00.122) 0:00:19.323 ********* 2025-07-05 22:53:43.620940 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.620958 | orchestrator | 2025-07-05 22:53:43.620977 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-05 22:53:43.620996 | orchestrator | Saturday 05 July 2025 22:53:42 +0000 (0:00:00.119) 0:00:19.443 ********* 2025-07-05 22:53:43.621015 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.621033 | orchestrator | 2025-07-05 22:53:43.621053 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-05 22:53:43.621065 | orchestrator | Saturday 05 July 2025 22:53:42 +0000 (0:00:00.139) 0:00:19.582 ********* 2025-07-05 22:53:43.621076 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.621086 | orchestrator | 2025-07-05 22:53:43.621097 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-05 22:53:43.621120 | orchestrator | Saturday 05 July 2025 22:53:42 +0000 (0:00:00.122) 0:00:19.705 ********* 2025-07-05 22:53:43.621131 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.621142 | orchestrator | 2025-07-05 22:53:43.621153 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-05 22:53:43.621164 | orchestrator | Saturday 05 July 2025 22:53:42 +0000 (0:00:00.131) 0:00:19.836 ********* 2025-07-05 22:53:43.621207 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:43.621228 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:43.621247 | orchestrator | skipping: [testbed-node-3] 2025-07-05 
22:53:43.621265 | orchestrator | 2025-07-05 22:53:43.621284 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-05 22:53:43.621303 | orchestrator | Saturday 05 July 2025 22:53:42 +0000 (0:00:00.137) 0:00:19.974 ********* 2025-07-05 22:53:43.621321 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:43.621339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:43.621358 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.621376 | orchestrator | 2025-07-05 22:53:43.621392 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-05 22:53:43.621410 | orchestrator | Saturday 05 July 2025 22:53:43 +0000 (0:00:00.262) 0:00:20.236 ********* 2025-07-05 22:53:43.621429 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:43.621446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:43.621465 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.621485 | orchestrator | 2025-07-05 22:53:43.621504 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-05 22:53:43.621523 | orchestrator | Saturday 05 July 2025 22:53:43 +0000 (0:00:00.155) 0:00:20.392 ********* 2025-07-05 22:53:43.621541 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 
22:53:43.621560 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:43.621577 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.621588 | orchestrator | 2025-07-05 22:53:43.621599 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-05 22:53:43.621610 | orchestrator | Saturday 05 July 2025 22:53:43 +0000 (0:00:00.139) 0:00:20.531 ********* 2025-07-05 22:53:43.621621 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:43.621632 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:43.621643 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:43.621654 | orchestrator | 2025-07-05 22:53:43.621665 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-05 22:53:43.621677 | orchestrator | Saturday 05 July 2025 22:53:43 +0000 (0:00:00.140) 0:00:20.672 ********* 2025-07-05 22:53:43.621695 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:43.621730 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:48.770428 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:48.770529 | orchestrator | 2025-07-05 22:53:48.770540 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-05 22:53:48.770550 | orchestrator | Saturday 05 July 2025 
22:53:43 +0000 (0:00:00.147) 0:00:20.819 ********* 2025-07-05 22:53:48.770558 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:48.770567 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:48.770575 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:48.770582 | orchestrator | 2025-07-05 22:53:48.770590 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-05 22:53:48.770597 | orchestrator | Saturday 05 July 2025 22:53:43 +0000 (0:00:00.140) 0:00:20.960 ********* 2025-07-05 22:53:48.770605 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:48.770612 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:48.770620 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:48.770627 | orchestrator | 2025-07-05 22:53:48.770635 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-05 22:53:48.770642 | orchestrator | Saturday 05 July 2025 22:53:43 +0000 (0:00:00.137) 0:00:21.098 ********* 2025-07-05 22:53:48.770650 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:53:48.770658 | orchestrator | 2025-07-05 22:53:48.770665 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-07-05 22:53:48.770673 | orchestrator | Saturday 05 July 2025 22:53:44 +0000 (0:00:00.569) 0:00:21.668 ********* 2025-07-05 22:53:48.770680 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:53:48.770687 | 
orchestrator | 2025-07-05 22:53:48.770695 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-05 22:53:48.770702 | orchestrator | Saturday 05 July 2025 22:53:45 +0000 (0:00:00.564) 0:00:22.232 ********* 2025-07-05 22:53:48.770710 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:53:48.770717 | orchestrator | 2025-07-05 22:53:48.770724 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-05 22:53:48.770732 | orchestrator | Saturday 05 July 2025 22:53:45 +0000 (0:00:00.134) 0:00:22.366 ********* 2025-07-05 22:53:48.770739 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'vg_name': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'}) 2025-07-05 22:53:48.770748 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'vg_name': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'}) 2025-07-05 22:53:48.770756 | orchestrator | 2025-07-05 22:53:48.770763 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-05 22:53:48.770770 | orchestrator | Saturday 05 July 2025 22:53:45 +0000 (0:00:00.151) 0:00:22.518 ********* 2025-07-05 22:53:48.770778 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:48.770785 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:48.770793 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:48.770800 | orchestrator | 2025-07-05 22:53:48.770808 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-05 22:53:48.770839 | orchestrator | Saturday 05 July 2025 22:53:45 +0000 
(0:00:00.142) 0:00:22.660 ********* 2025-07-05 22:53:48.770847 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:48.770855 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:48.770862 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:48.770870 | orchestrator | 2025-07-05 22:53:48.770877 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-05 22:53:48.770885 | orchestrator | Saturday 05 July 2025 22:53:45 +0000 (0:00:00.291) 0:00:22.952 ********* 2025-07-05 22:53:48.770892 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})  2025-07-05 22:53:48.770899 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})  2025-07-05 22:53:48.770907 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:53:48.770914 | orchestrator | 2025-07-05 22:53:48.770922 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-05 22:53:48.770929 | orchestrator | Saturday 05 July 2025 22:53:45 +0000 (0:00:00.151) 0:00:23.104 ********* 2025-07-05 22:53:48.770937 | orchestrator | ok: [testbed-node-3] => { 2025-07-05 22:53:48.770944 | orchestrator |  "lvm_report": { 2025-07-05 22:53:48.770953 | orchestrator |  "lv": [ 2025-07-05 22:53:48.770961 | orchestrator |  { 2025-07-05 22:53:48.770983 | orchestrator |  "lv_name": "osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156", 2025-07-05 22:53:48.770993 | orchestrator |  "vg_name": "ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156" 2025-07-05 
22:53:48.771001 | orchestrator |  },
2025-07-05 22:53:48.771009 | orchestrator |  {
2025-07-05 22:53:48.771018 | orchestrator |  "lv_name": "osd-block-8de564a6-401f-59e2-a445-234b3be175ce",
2025-07-05 22:53:48.771026 | orchestrator |  "vg_name": "ceph-8de564a6-401f-59e2-a445-234b3be175ce"
2025-07-05 22:53:48.771034 | orchestrator |  }
2025-07-05 22:53:48.771042 | orchestrator |  ],
2025-07-05 22:53:48.771051 | orchestrator |  "pv": [
2025-07-05 22:53:48.771059 | orchestrator |  {
2025-07-05 22:53:48.771067 | orchestrator |  "pv_name": "/dev/sdb",
2025-07-05 22:53:48.771075 | orchestrator |  "vg_name": "ceph-8de564a6-401f-59e2-a445-234b3be175ce"
2025-07-05 22:53:48.771083 | orchestrator |  },
2025-07-05 22:53:48.771091 | orchestrator |  {
2025-07-05 22:53:48.771099 | orchestrator |  "pv_name": "/dev/sdc",
2025-07-05 22:53:48.771108 | orchestrator |  "vg_name": "ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156"
2025-07-05 22:53:48.771116 | orchestrator |  }
2025-07-05 22:53:48.771124 | orchestrator |  ]
2025-07-05 22:53:48.771132 | orchestrator |  }
2025-07-05 22:53:48.771140 | orchestrator | }
2025-07-05 22:53:48.771149 | orchestrator |
2025-07-05 22:53:48.771157 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-07-05 22:53:48.771200 | orchestrator |
2025-07-05 22:53:48.771224 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-05 22:53:48.771233 | orchestrator | Saturday 05 July 2025 22:53:46 +0000 (0:00:00.266) 0:00:23.370 *********
2025-07-05 22:53:48.771241 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-07-05 22:53:48.771250 | orchestrator |
2025-07-05 22:53:48.771258 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-05 22:53:48.771267 | orchestrator | Saturday 05 July 2025 22:53:46 +0000 (0:00:00.239) 0:00:23.610 *********
2025-07-05 22:53:48.771275 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:53:48.771283 | orchestrator |
2025-07-05 22:53:48.771290 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:48.771304 | orchestrator | Saturday 05 July 2025 22:53:46 +0000 (0:00:00.227) 0:00:23.837 *********
2025-07-05 22:53:48.771312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-07-05 22:53:48.771319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-07-05 22:53:48.771326 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-07-05 22:53:48.771334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-07-05 22:53:48.771341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-07-05 22:53:48.771349 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-07-05 22:53:48.771356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-07-05 22:53:48.771363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-07-05 22:53:48.771371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-07-05 22:53:48.771378 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-07-05 22:53:48.771385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-07-05 22:53:48.771393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-07-05 22:53:48.771400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-07-05 22:53:48.771408 | orchestrator |
2025-07-05 22:53:48.771415 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:48.771423 | orchestrator | Saturday 05 July 2025 22:53:47 +0000 (0:00:00.403) 0:00:24.240 *********
2025-07-05 22:53:48.771430 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:48.771438 | orchestrator |
2025-07-05 22:53:48.771445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:48.771453 | orchestrator | Saturday 05 July 2025 22:53:47 +0000 (0:00:00.186) 0:00:24.427 *********
2025-07-05 22:53:48.771460 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:48.771467 | orchestrator |
2025-07-05 22:53:48.771475 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:48.771482 | orchestrator | Saturday 05 July 2025 22:53:47 +0000 (0:00:00.170) 0:00:24.597 *********
2025-07-05 22:53:48.771490 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:48.771497 | orchestrator |
2025-07-05 22:53:48.771505 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:48.771512 | orchestrator | Saturday 05 July 2025 22:53:47 +0000 (0:00:00.189) 0:00:24.786 *********
2025-07-05 22:53:48.771520 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:48.771527 | orchestrator |
2025-07-05 22:53:48.771535 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:48.771542 | orchestrator | Saturday 05 July 2025 22:53:48 +0000 (0:00:00.578) 0:00:25.365 *********
2025-07-05 22:53:48.771549 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:48.771557 | orchestrator |
2025-07-05 22:53:48.771564 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:48.771572 | orchestrator | Saturday 05 July 2025 22:53:48 +0000 (0:00:00.207) 0:00:25.572 *********
2025-07-05 22:53:48.771583 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:48.771590 | orchestrator |
2025-07-05 22:53:48.771598 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:48.771605 | orchestrator | Saturday 05 July 2025 22:53:48 +0000 (0:00:00.198) 0:00:25.770 *********
2025-07-05 22:53:48.771613 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:48.771620 | orchestrator |
2025-07-05 22:53:48.771633 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:58.384696 | orchestrator | Saturday 05 July 2025 22:53:48 +0000 (0:00:00.196) 0:00:25.967 *********
2025-07-05 22:53:58.385503 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.385535 | orchestrator |
2025-07-05 22:53:58.385549 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:58.385561 | orchestrator | Saturday 05 July 2025 22:53:48 +0000 (0:00:00.202) 0:00:26.169 *********
2025-07-05 22:53:58.385573 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189)
2025-07-05 22:53:58.385585 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189)
2025-07-05 22:53:58.385596 | orchestrator |
2025-07-05 22:53:58.385608 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:58.385619 | orchestrator | Saturday 05 July 2025 22:53:49 +0000 (0:00:00.438) 0:00:26.608 *********
2025-07-05 22:53:58.385630 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_19122c33-f71f-45f9-9cf9-313728601123)
2025-07-05 22:53:58.385641 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_19122c33-f71f-45f9-9cf9-313728601123)
2025-07-05 22:53:58.385652 | orchestrator |
2025-07-05 22:53:58.385663 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:58.385674 | orchestrator | Saturday 05 July 2025 22:53:49 +0000 (0:00:00.449) 0:00:27.058 *********
2025-07-05 22:53:58.385685 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_04acd911-9b95-486d-a663-ed49966b13bc)
2025-07-05 22:53:58.385696 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_04acd911-9b95-486d-a663-ed49966b13bc)
2025-07-05 22:53:58.385707 | orchestrator |
2025-07-05 22:53:58.385718 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:58.385729 | orchestrator | Saturday 05 July 2025 22:53:50 +0000 (0:00:00.398) 0:00:27.457 *********
2025-07-05 22:53:58.385740 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b8c0761f-22b5-43a1-bf1b-76278e72919b)
2025-07-05 22:53:58.385751 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b8c0761f-22b5-43a1-bf1b-76278e72919b)
2025-07-05 22:53:58.385762 | orchestrator |
2025-07-05 22:53:58.385773 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-05 22:53:58.385784 | orchestrator | Saturday 05 July 2025 22:53:50 +0000 (0:00:00.397) 0:00:27.854 *********
2025-07-05 22:53:58.385795 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-05 22:53:58.385806 | orchestrator |
2025-07-05 22:53:58.385817 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.385828 | orchestrator | Saturday 05 July 2025 22:53:50 +0000 (0:00:00.309) 0:00:28.164 *********
2025-07-05 22:53:58.385839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-07-05 22:53:58.385851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-07-05 22:53:58.385862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-07-05 22:53:58.385872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-07-05 22:53:58.385883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-07-05 22:53:58.385894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-07-05 22:53:58.385905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-07-05 22:53:58.385915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-07-05 22:53:58.385926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-07-05 22:53:58.385937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-07-05 22:53:58.385974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-07-05 22:53:58.385986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-07-05 22:53:58.385997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-07-05 22:53:58.386008 | orchestrator |
2025-07-05 22:53:58.386076 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.386088 | orchestrator | Saturday 05 July 2025 22:53:51 +0000 (0:00:00.507) 0:00:28.671 *********
2025-07-05 22:53:58.386099 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.386110 | orchestrator |
2025-07-05 22:53:58.386121 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.386132 | orchestrator | Saturday 05 July 2025 22:53:51 +0000 (0:00:00.201) 0:00:28.872 *********
2025-07-05 22:53:58.386143 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.386393 | orchestrator |
2025-07-05 22:53:58.386406 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.386431 | orchestrator | Saturday 05 July 2025 22:53:51 +0000 (0:00:00.214) 0:00:29.087 *********
2025-07-05 22:53:58.386443 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.386454 | orchestrator |
2025-07-05 22:53:58.386465 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.386476 | orchestrator | Saturday 05 July 2025 22:53:52 +0000 (0:00:00.195) 0:00:29.282 *********
2025-07-05 22:53:58.386487 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.386498 | orchestrator |
2025-07-05 22:53:58.386586 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.386603 | orchestrator | Saturday 05 July 2025 22:53:52 +0000 (0:00:00.184) 0:00:29.467 *********
2025-07-05 22:53:58.386614 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.386625 | orchestrator |
2025-07-05 22:53:58.386636 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.386647 | orchestrator | Saturday 05 July 2025 22:53:52 +0000 (0:00:00.183) 0:00:29.651 *********
2025-07-05 22:53:58.386658 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.386669 | orchestrator |
2025-07-05 22:53:58.386680 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.386691 | orchestrator | Saturday 05 July 2025 22:53:52 +0000 (0:00:00.184) 0:00:29.835 *********
2025-07-05 22:53:58.386702 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.386713 | orchestrator |
2025-07-05 22:53:58.386724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.386735 | orchestrator | Saturday 05 July 2025 22:53:52 +0000 (0:00:00.182) 0:00:30.018 *********
2025-07-05 22:53:58.386746 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.386757 | orchestrator |
2025-07-05 22:53:58.386768 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.386779 | orchestrator | Saturday 05 July 2025 22:53:52 +0000 (0:00:00.184) 0:00:30.202 *********
2025-07-05 22:53:58.386790 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-07-05 22:53:58.386801 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-07-05 22:53:58.386812 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-07-05 22:53:58.386823 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-07-05 22:53:58.386834 | orchestrator |
2025-07-05 22:53:58.386845 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.386856 | orchestrator | Saturday 05 July 2025 22:53:53 +0000 (0:00:00.724) 0:00:30.926 *********
2025-07-05 22:53:58.386867 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.386878 | orchestrator |
2025-07-05 22:53:58.386889 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.386935 | orchestrator | Saturday 05 July 2025 22:53:53 +0000 (0:00:00.181) 0:00:31.107 *********
2025-07-05 22:53:58.386947 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.386974 | orchestrator |
2025-07-05 22:53:58.386985 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.386996 | orchestrator | Saturday 05 July 2025 22:53:54 +0000 (0:00:00.172) 0:00:31.280 *********
2025-07-05 22:53:58.387007 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.387018 | orchestrator |
2025-07-05 22:53:58.387029 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-05 22:53:58.387040 | orchestrator | Saturday 05 July 2025 22:53:54 +0000 (0:00:00.481) 0:00:31.762 *********
2025-07-05 22:53:58.387051 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.387062 | orchestrator |
2025-07-05 22:53:58.387073 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-07-05 22:53:58.387084 | orchestrator | Saturday 05 July 2025 22:53:54 +0000 (0:00:00.135) 0:00:31.946 *********
2025-07-05 22:53:58.387094 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.387105 | orchestrator |
2025-07-05 22:53:58.387116 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-07-05 22:53:58.387127 | orchestrator | Saturday 05 July 2025 22:53:54 +0000 (0:00:00.135) 0:00:32.081 *********
2025-07-05 22:53:58.387138 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b5adb4f-945c-5107-b1d3-f691d6050e0c'}})
2025-07-05 22:53:58.387179 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '24fdde66-e3ee-586c-8774-3b73abfeacc0'}})
2025-07-05 22:53:58.387191 | orchestrator |
2025-07-05 22:53:58.387202 | orchestrator | TASK [Create block VGs] ********************************************************
2025-07-05 22:53:58.387213 | orchestrator | Saturday 05 July 2025 22:53:55 +0000 (0:00:00.174) 0:00:32.255 *********
2025-07-05 22:53:58.387225 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:53:58.387238 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:53:58.387248 | orchestrator |
2025-07-05 22:53:58.387259 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-07-05 22:53:58.387270 | orchestrator | Saturday 05 July 2025 22:53:56 +0000 (0:00:01.796) 0:00:34.052 *********
2025-07-05 22:53:58.387281 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:53:58.387294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:53:58.387305 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:53:58.387316 | orchestrator |
2025-07-05 22:53:58.387326 | orchestrator | TASK [Create block LVs] ********************************************************
2025-07-05 22:53:58.387337 | orchestrator | Saturday 05 July 2025 22:53:56 +0000 (0:00:00.150) 0:00:34.202 *********
2025-07-05 22:53:58.387348 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:53:58.387360 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:53:58.387370 | orchestrator |
2025-07-05 22:53:58.387390 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-07-05 22:54:03.947597 | orchestrator | Saturday 05 July 2025 22:53:58 +0000 (0:00:01.377) 0:00:35.580 *********
2025-07-05 22:54:03.947733 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:54:03.947759 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:54:03.947809 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.947829 | orchestrator |
2025-07-05 22:54:03.947847 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-07-05 22:54:03.947868 | orchestrator | Saturday 05 July 2025 22:53:58 +0000 (0:00:00.156) 0:00:35.737 *********
2025-07-05 22:54:03.947886 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.947905 | orchestrator |
2025-07-05 22:54:03.947947 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-07-05 22:54:03.947967 | orchestrator | Saturday 05 July 2025 22:53:58 +0000 (0:00:00.127) 0:00:35.864 *********
2025-07-05 22:54:03.947986 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:54:03.948006 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:54:03.948025 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.948044 | orchestrator |
2025-07-05 22:54:03.948063 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-07-05 22:54:03.948081 | orchestrator | Saturday 05 July 2025 22:53:58 +0000 (0:00:00.164) 0:00:36.028 *********
2025-07-05 22:54:03.948101 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.948122 | orchestrator |
2025-07-05 22:54:03.948171 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-07-05 22:54:03.948194 | orchestrator | Saturday 05 July 2025 22:53:58 +0000 (0:00:00.142) 0:00:36.170 *********
2025-07-05 22:54:03.948217 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:54:03.948239 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:54:03.948263 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.948289 | orchestrator |
2025-07-05 22:54:03.948312 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-07-05 22:54:03.948338 | orchestrator | Saturday 05 July 2025 22:53:59 +0000 (0:00:00.155) 0:00:36.326 *********
2025-07-05 22:54:03.948362 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.948384 | orchestrator |
2025-07-05 22:54:03.948408 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-07-05 22:54:03.948431 | orchestrator | Saturday 05 July 2025 22:53:59 +0000 (0:00:00.346) 0:00:36.672 *********
2025-07-05 22:54:03.948450 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:54:03.948468 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:54:03.948486 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.948504 | orchestrator |
2025-07-05 22:54:03.948522 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-07-05 22:54:03.948540 | orchestrator | Saturday 05 July 2025 22:53:59 +0000 (0:00:00.157) 0:00:36.830 *********
2025-07-05 22:54:03.948559 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:54:03.948577 | orchestrator |
2025-07-05 22:54:03.948596 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-07-05 22:54:03.948613 | orchestrator | Saturday 05 July 2025 22:53:59 +0000 (0:00:00.143) 0:00:36.974 *********
2025-07-05 22:54:03.948631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:54:03.948649 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:54:03.948699 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.948719 | orchestrator |
2025-07-05 22:54:03.948738 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-07-05 22:54:03.948755 | orchestrator | Saturday 05 July 2025 22:53:59 +0000 (0:00:00.158) 0:00:37.132 *********
2025-07-05 22:54:03.948773 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:54:03.948803 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:54:03.948823 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.948842 | orchestrator |
2025-07-05 22:54:03.948862 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-07-05 22:54:03.948881 | orchestrator | Saturday 05 July 2025 22:54:00 +0000 (0:00:00.162) 0:00:37.294 *********
2025-07-05 22:54:03.948928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:54:03.948949 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:54:03.948966 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.948984 | orchestrator |
2025-07-05 22:54:03.949002 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-07-05 22:54:03.949021 | orchestrator | Saturday 05 July 2025 22:54:00 +0000 (0:00:00.151) 0:00:37.446 *********
2025-07-05 22:54:03.949040 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.949058 | orchestrator |
2025-07-05 22:54:03.949076 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-07-05 22:54:03.949094 | orchestrator | Saturday 05 July 2025 22:54:00 +0000 (0:00:00.130) 0:00:37.577 *********
2025-07-05 22:54:03.949112 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.949130 | orchestrator |
2025-07-05 22:54:03.949179 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-07-05 22:54:03.949198 | orchestrator | Saturday 05 July 2025 22:54:00 +0000 (0:00:00.134) 0:00:37.711 *********
2025-07-05 22:54:03.949217 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.949236 | orchestrator |
2025-07-05 22:54:03.949255 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-07-05 22:54:03.949275 | orchestrator | Saturday 05 July 2025 22:54:00 +0000 (0:00:00.136) 0:00:37.848 *********
2025-07-05 22:54:03.949293 | orchestrator | ok: [testbed-node-4] => {
2025-07-05 22:54:03.949320 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-07-05 22:54:03.949342 | orchestrator | }
2025-07-05 22:54:03.949360 | orchestrator |
2025-07-05 22:54:03.949379 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-07-05 22:54:03.949398 | orchestrator | Saturday 05 July 2025 22:54:00 +0000 (0:00:00.151) 0:00:37.999 *********
2025-07-05 22:54:03.949414 | orchestrator | ok: [testbed-node-4] => {
2025-07-05 22:54:03.949431 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-07-05 22:54:03.949449 | orchestrator | }
2025-07-05 22:54:03.949467 | orchestrator |
2025-07-05 22:54:03.949485 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-07-05 22:54:03.949505 | orchestrator | Saturday 05 July 2025 22:54:00 +0000 (0:00:00.149) 0:00:38.149 *********
2025-07-05 22:54:03.949525 | orchestrator | ok: [testbed-node-4] => {
2025-07-05 22:54:03.949543 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-07-05 22:54:03.949561 | orchestrator | }
2025-07-05 22:54:03.949578 | orchestrator |
2025-07-05 22:54:03.949596 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-07-05 22:54:03.949613 | orchestrator | Saturday 05 July 2025 22:54:01 +0000 (0:00:00.145) 0:00:38.294 *********
2025-07-05 22:54:03.949631 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:54:03.949649 | orchestrator |
2025-07-05 22:54:03.949688 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-07-05 22:54:03.949707 | orchestrator | Saturday 05 July 2025 22:54:01 +0000 (0:00:00.720) 0:00:39.015 *********
2025-07-05 22:54:03.949724 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:54:03.949743 | orchestrator |
2025-07-05 22:54:03.949771 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-07-05 22:54:03.949790 | orchestrator | Saturday 05 July 2025 22:54:02 +0000 (0:00:00.518) 0:00:39.534 *********
2025-07-05 22:54:03.949808 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:54:03.949825 | orchestrator |
2025-07-05 22:54:03.949843 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-07-05 22:54:03.949861 | orchestrator | Saturday 05 July 2025 22:54:02 +0000 (0:00:00.519) 0:00:40.054 *********
2025-07-05 22:54:03.949879 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:54:03.949897 | orchestrator |
2025-07-05 22:54:03.949915 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-07-05 22:54:03.949933 | orchestrator | Saturday 05 July 2025 22:54:02 +0000 (0:00:00.148) 0:00:40.202 *********
2025-07-05 22:54:03.949953 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.950014 | orchestrator |
2025-07-05 22:54:03.950105 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-07-05 22:54:03.950124 | orchestrator | Saturday 05 July 2025 22:54:03 +0000 (0:00:00.118) 0:00:40.321 *********
2025-07-05 22:54:03.950280 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.950312 | orchestrator |
2025-07-05 22:54:03.950338 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-07-05 22:54:03.950356 | orchestrator | Saturday 05 July 2025 22:54:03 +0000 (0:00:00.111) 0:00:40.432 *********
2025-07-05 22:54:03.950373 | orchestrator | ok: [testbed-node-4] => {
2025-07-05 22:54:03.950392 | orchestrator |  "vgs_report": {
2025-07-05 22:54:03.950420 | orchestrator |  "vg": []
2025-07-05 22:54:03.950442 | orchestrator |  }
2025-07-05 22:54:03.950460 | orchestrator | }
2025-07-05 22:54:03.950479 | orchestrator |
2025-07-05 22:54:03.950497 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-07-05 22:54:03.950521 | orchestrator | Saturday 05 July 2025 22:54:03 +0000 (0:00:00.151) 0:00:40.584 *********
2025-07-05 22:54:03.950546 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.950564 | orchestrator |
2025-07-05 22:54:03.950582 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-07-05 22:54:03.950601 | orchestrator | Saturday 05 July 2025 22:54:03 +0000 (0:00:00.143) 0:00:40.728 *********
2025-07-05 22:54:03.950618 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.950642 | orchestrator |
2025-07-05 22:54:03.950668 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-07-05 22:54:03.950702 | orchestrator | Saturday 05 July 2025 22:54:03 +0000 (0:00:00.141) 0:00:40.870 *********
2025-07-05 22:54:03.950720 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.950737 | orchestrator |
2025-07-05 22:54:03.950747 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-07-05 22:54:03.950780 | orchestrator | Saturday 05 July 2025 22:54:03 +0000 (0:00:00.137) 0:00:41.008 *********
2025-07-05 22:54:03.950790 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:03.950800 | orchestrator |
2025-07-05 22:54:03.950810 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-07-05 22:54:03.950839 | orchestrator | Saturday 05 July 2025 22:54:03 +0000 (0:00:00.137) 0:00:41.145 *********
2025-07-05 22:54:08.370579 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.370711 | orchestrator |
2025-07-05 22:54:08.370737 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-07-05 22:54:08.370760 | orchestrator | Saturday 05 July 2025 22:54:04 +0000 (0:00:00.142) 0:00:41.287 *********
2025-07-05 22:54:08.370780 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.370801 | orchestrator |
2025-07-05 22:54:08.370821 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-07-05 22:54:08.370859 | orchestrator | Saturday 05 July 2025 22:54:04 +0000 (0:00:00.357) 0:00:41.645 *********
2025-07-05 22:54:08.370871 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.370882 | orchestrator |
2025-07-05 22:54:08.370893 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-07-05 22:54:08.370905 | orchestrator | Saturday 05 July 2025 22:54:04 +0000 (0:00:00.135) 0:00:41.780 *********
2025-07-05 22:54:08.370916 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.370927 | orchestrator |
2025-07-05 22:54:08.370938 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-07-05 22:54:08.370949 | orchestrator | Saturday 05 July 2025 22:54:04 +0000 (0:00:00.113) 0:00:41.894 *********
2025-07-05 22:54:08.370960 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.370971 | orchestrator |
2025-07-05 22:54:08.370983 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-07-05 22:54:08.370994 | orchestrator | Saturday 05 July 2025 22:54:04 +0000 (0:00:00.126) 0:00:42.020 *********
2025-07-05 22:54:08.371005 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.371015 | orchestrator |
2025-07-05 22:54:08.371027 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-07-05 22:54:08.371038 | orchestrator | Saturday 05 July 2025 22:54:04 +0000 (0:00:00.123) 0:00:42.143 *********
2025-07-05 22:54:08.371049 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.371060 | orchestrator |
2025-07-05 22:54:08.371071 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-07-05 22:54:08.371082 | orchestrator | Saturday 05 July 2025 22:54:05 +0000 (0:00:00.126) 0:00:42.270 *********
2025-07-05 22:54:08.371093 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.371109 | orchestrator |
2025-07-05 22:54:08.371129 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-07-05 22:54:08.371180 | orchestrator | Saturday 05 July 2025 22:54:05 +0000 (0:00:00.119) 0:00:42.389 *********
2025-07-05 22:54:08.371198 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.371213 | orchestrator |
2025-07-05 22:54:08.371225 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-07-05 22:54:08.371236 | orchestrator | Saturday 05 July 2025 22:54:05 +0000 (0:00:00.120) 0:00:42.510 *********
2025-07-05 22:54:08.371247 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.371257 | orchestrator |
2025-07-05 22:54:08.371269 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-07-05 22:54:08.371280 | orchestrator | Saturday 05 July 2025 22:54:05 +0000 (0:00:00.127) 0:00:42.637 *********
2025-07-05 22:54:08.371292 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:54:08.371305 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:54:08.371316 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.371327 | orchestrator |
2025-07-05 22:54:08.371338 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-07-05 22:54:08.371349 | orchestrator | Saturday 05 July 2025 22:54:05 +0000 (0:00:00.130) 0:00:42.767 *********
2025-07-05 22:54:08.371360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:54:08.371371 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:54:08.371383 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.371394 | orchestrator |
2025-07-05 22:54:08.371404 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-07-05 22:54:08.371420 | orchestrator | Saturday 05 July 2025 22:54:05 +0000 (0:00:00.161) 0:00:42.929 *********
2025-07-05 22:54:08.371438 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:54:08.371471 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:54:08.371487 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.371499 | orchestrator |
2025-07-05 22:54:08.371510 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-07-05 22:54:08.371521 | orchestrator | Saturday 05 July 2025 22:54:05 +0000 (0:00:00.147) 0:00:43.076 *********
2025-07-05 22:54:08.371533 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:54:08.371544 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:54:08.371556 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.371567 | orchestrator |
2025-07-05 22:54:08.371578 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-07-05 22:54:08.371611 | orchestrator | Saturday 05 July 2025 22:54:06 +0000 (0:00:00.291) 0:00:43.368 *********
2025-07-05 22:54:08.371623 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:54:08.371634 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:54:08.371646 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.371657 | orchestrator |
2025-07-05 22:54:08.371668 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-07-05 22:54:08.371679 | orchestrator | Saturday 05 July 2025 22:54:06 +0000 (0:00:00.151) 0:00:43.519 *********
2025-07-05 22:54:08.371690 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 22:54:08.371702 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 22:54:08.371713 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:08.371724 | orchestrator |
2025-07-05 22:54:08.371735 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-07-05 22:54:08.371747 | orchestrator | Saturday 05 July 2025 22:54:06 +0000 (0:00:00.142) 0:00:43.810 *********
2025-07-05 22:54:08.371758 | orchestrator | skipping: [testbed-node-4] =>
(item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})  2025-07-05 22:54:08.371836 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})  2025-07-05 22:54:08.371896 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:54:08.371909 | orchestrator | 2025-07-05 22:54:08.371920 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-05 22:54:08.371940 | orchestrator | Saturday 05 July 2025 22:54:06 +0000 (0:00:00.134) 0:00:43.945 ********* 2025-07-05 22:54:08.371951 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:54:08.371962 | orchestrator | 2025-07-05 22:54:08.371973 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-07-05 22:54:08.371984 | orchestrator | Saturday 05 July 2025 22:54:07 +0000 (0:00:00.506) 0:00:44.452 ********* 2025-07-05 22:54:08.371995 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:54:08.372006 | orchestrator | 2025-07-05 22:54:08.372017 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-05 22:54:08.372028 | orchestrator | Saturday 05 July 2025 22:54:07 +0000 (0:00:00.530) 0:00:44.982 ********* 2025-07-05 22:54:08.372039 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:54:08.372050 | orchestrator | 2025-07-05 22:54:08.372061 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-05 22:54:08.372072 | orchestrator | Saturday 05 July 2025 22:54:07 +0000 (0:00:00.139) 0:00:45.121 ********* 2025-07-05 22:54:08.372083 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'vg_name': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'}) 2025-07-05 22:54:08.372096 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'vg_name': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'}) 2025-07-05 22:54:08.372107 | orchestrator | 2025-07-05 22:54:08.372118 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-05 22:54:08.372129 | orchestrator | Saturday 05 July 2025 22:54:08 +0000 (0:00:00.153) 0:00:45.275 ********* 2025-07-05 22:54:08.372171 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})  2025-07-05 22:54:08.372183 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})  2025-07-05 22:54:08.372194 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:54:08.372205 | orchestrator | 2025-07-05 22:54:08.372216 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-05 22:54:08.372232 | orchestrator | Saturday 05 July 2025 22:54:08 +0000 (0:00:00.140) 0:00:45.415 ********* 2025-07-05 22:54:08.372243 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})  2025-07-05 22:54:08.372255 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})  2025-07-05 22:54:08.372274 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:54:13.737745 | orchestrator | 2025-07-05 22:54:13.737855 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-05 22:54:13.737872 | orchestrator | Saturday 05 July 2025 22:54:08 +0000 (0:00:00.154) 0:00:45.570 ********* 2025-07-05 22:54:13.737885 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})  2025-07-05 22:54:13.737899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})  2025-07-05 22:54:13.737911 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:54:13.737924 | orchestrator | 2025-07-05 22:54:13.737935 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-05 22:54:13.737947 | orchestrator | Saturday 05 July 2025 22:54:08 +0000 (0:00:00.139) 0:00:45.709 ********* 2025-07-05 22:54:13.737959 | orchestrator | ok: [testbed-node-4] => { 2025-07-05 22:54:13.737970 | orchestrator |  "lvm_report": { 2025-07-05 22:54:13.737982 | orchestrator |  "lv": [ 2025-07-05 22:54:13.737993 | orchestrator |  { 2025-07-05 22:54:13.738005 | orchestrator |  "lv_name": "osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0", 2025-07-05 22:54:13.738096 | orchestrator |  "vg_name": "ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0" 2025-07-05 22:54:13.738191 | orchestrator |  }, 2025-07-05 22:54:13.738209 | orchestrator |  { 2025-07-05 22:54:13.738228 | orchestrator |  "lv_name": "osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c", 2025-07-05 22:54:13.738249 | orchestrator |  "vg_name": "ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c" 2025-07-05 22:54:13.738268 | orchestrator |  } 2025-07-05 22:54:13.738286 | orchestrator |  ], 2025-07-05 22:54:13.738299 | orchestrator |  "pv": [ 2025-07-05 22:54:13.738311 | orchestrator |  { 2025-07-05 22:54:13.738323 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-05 22:54:13.738335 | orchestrator |  "vg_name": "ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c" 2025-07-05 22:54:13.738348 | orchestrator |  }, 2025-07-05 22:54:13.738360 | orchestrator |  { 2025-07-05 22:54:13.738372 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-05 22:54:13.738384 | orchestrator |  "vg_name": 
"ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0" 2025-07-05 22:54:13.738396 | orchestrator |  } 2025-07-05 22:54:13.738409 | orchestrator |  ] 2025-07-05 22:54:13.738420 | orchestrator |  } 2025-07-05 22:54:13.738433 | orchestrator | } 2025-07-05 22:54:13.738447 | orchestrator | 2025-07-05 22:54:13.738459 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-05 22:54:13.738471 | orchestrator | 2025-07-05 22:54:13.738484 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-05 22:54:13.738496 | orchestrator | Saturday 05 July 2025 22:54:08 +0000 (0:00:00.395) 0:00:46.105 ********* 2025-07-05 22:54:13.738509 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-07-05 22:54:13.738521 | orchestrator | 2025-07-05 22:54:13.738534 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-05 22:54:13.738618 | orchestrator | Saturday 05 July 2025 22:54:09 +0000 (0:00:00.228) 0:00:46.334 ********* 2025-07-05 22:54:13.738641 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:54:13.738653 | orchestrator | 2025-07-05 22:54:13.738664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.738675 | orchestrator | Saturday 05 July 2025 22:54:09 +0000 (0:00:00.212) 0:00:46.547 ********* 2025-07-05 22:54:13.738686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-07-05 22:54:13.738697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-07-05 22:54:13.738709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-07-05 22:54:13.738720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-07-05 22:54:13.738731 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-07-05 22:54:13.738741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-07-05 22:54:13.738753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-07-05 22:54:13.738764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-07-05 22:54:13.738775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-07-05 22:54:13.738786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-07-05 22:54:13.738796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-07-05 22:54:13.738808 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-07-05 22:54:13.738819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-07-05 22:54:13.738830 | orchestrator | 2025-07-05 22:54:13.738841 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.738877 | orchestrator | Saturday 05 July 2025 22:54:09 +0000 (0:00:00.369) 0:00:46.916 ********* 2025-07-05 22:54:13.738889 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:13.738900 | orchestrator | 2025-07-05 22:54:13.738911 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.738922 | orchestrator | Saturday 05 July 2025 22:54:09 +0000 (0:00:00.199) 0:00:47.115 ********* 2025-07-05 22:54:13.738933 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:13.738944 | orchestrator | 2025-07-05 22:54:13.738955 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.738989 | orchestrator | 
Saturday 05 July 2025 22:54:10 +0000 (0:00:00.195) 0:00:47.311 ********* 2025-07-05 22:54:13.739001 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:13.739012 | orchestrator | 2025-07-05 22:54:13.739023 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.739034 | orchestrator | Saturday 05 July 2025 22:54:10 +0000 (0:00:00.183) 0:00:47.494 ********* 2025-07-05 22:54:13.739045 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:13.739056 | orchestrator | 2025-07-05 22:54:13.739067 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.739078 | orchestrator | Saturday 05 July 2025 22:54:10 +0000 (0:00:00.190) 0:00:47.685 ********* 2025-07-05 22:54:13.739089 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:13.739100 | orchestrator | 2025-07-05 22:54:13.739112 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.739150 | orchestrator | Saturday 05 July 2025 22:54:10 +0000 (0:00:00.185) 0:00:47.870 ********* 2025-07-05 22:54:13.739163 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:13.739174 | orchestrator | 2025-07-05 22:54:13.739186 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.739197 | orchestrator | Saturday 05 July 2025 22:54:11 +0000 (0:00:00.456) 0:00:48.327 ********* 2025-07-05 22:54:13.739208 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:13.739219 | orchestrator | 2025-07-05 22:54:13.739230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.739241 | orchestrator | Saturday 05 July 2025 22:54:11 +0000 (0:00:00.184) 0:00:48.511 ********* 2025-07-05 22:54:13.739252 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:13.739263 | orchestrator | 2025-07-05 22:54:13.739274 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.739285 | orchestrator | Saturday 05 July 2025 22:54:11 +0000 (0:00:00.179) 0:00:48.691 ********* 2025-07-05 22:54:13.739296 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122) 2025-07-05 22:54:13.739308 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122) 2025-07-05 22:54:13.739319 | orchestrator | 2025-07-05 22:54:13.739330 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.739341 | orchestrator | Saturday 05 July 2025 22:54:11 +0000 (0:00:00.387) 0:00:49.078 ********* 2025-07-05 22:54:13.739352 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8a7d49ca-9238-4676-a846-742ace525871) 2025-07-05 22:54:13.739363 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8a7d49ca-9238-4676-a846-742ace525871) 2025-07-05 22:54:13.739374 | orchestrator | 2025-07-05 22:54:13.739384 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.739395 | orchestrator | Saturday 05 July 2025 22:54:12 +0000 (0:00:00.390) 0:00:49.468 ********* 2025-07-05 22:54:13.739407 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ba536110-d8e3-4c62-9758-5989affe708c) 2025-07-05 22:54:13.739418 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ba536110-d8e3-4c62-9758-5989affe708c) 2025-07-05 22:54:13.739429 | orchestrator | 2025-07-05 22:54:13.739440 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.739458 | orchestrator | Saturday 05 July 2025 22:54:12 +0000 (0:00:00.397) 0:00:49.866 ********* 2025-07-05 22:54:13.739469 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_f21d976d-9ccd-433e-8515-86bf556b9e6c) 2025-07-05 22:54:13.739480 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f21d976d-9ccd-433e-8515-86bf556b9e6c) 2025-07-05 22:54:13.739491 | orchestrator | 2025-07-05 22:54:13.739502 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-05 22:54:13.739513 | orchestrator | Saturday 05 July 2025 22:54:13 +0000 (0:00:00.387) 0:00:50.254 ********* 2025-07-05 22:54:13.739524 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-05 22:54:13.739535 | orchestrator | 2025-07-05 22:54:13.739546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:13.739557 | orchestrator | Saturday 05 July 2025 22:54:13 +0000 (0:00:00.302) 0:00:50.557 ********* 2025-07-05 22:54:13.739568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-07-05 22:54:13.739578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-07-05 22:54:13.739589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-07-05 22:54:13.739600 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-07-05 22:54:13.739611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-07-05 22:54:13.739622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-07-05 22:54:13.739632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-07-05 22:54:13.739649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-07-05 22:54:13.739660 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-07-05 22:54:13.739671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-07-05 22:54:13.739682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-07-05 22:54:13.739699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-07-05 22:54:22.558744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-07-05 22:54:22.559606 | orchestrator | 2025-07-05 22:54:22.559651 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:22.559672 | orchestrator | Saturday 05 July 2025 22:54:13 +0000 (0:00:00.373) 0:00:50.930 ********* 2025-07-05 22:54:22.559688 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.559706 | orchestrator | 2025-07-05 22:54:22.559725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:22.559743 | orchestrator | Saturday 05 July 2025 22:54:13 +0000 (0:00:00.180) 0:00:51.110 ********* 2025-07-05 22:54:22.559760 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.559777 | orchestrator | 2025-07-05 22:54:22.559795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:22.559814 | orchestrator | Saturday 05 July 2025 22:54:14 +0000 (0:00:00.197) 0:00:51.308 ********* 2025-07-05 22:54:22.559832 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.559850 | orchestrator | 2025-07-05 22:54:22.559869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:22.559887 | orchestrator | Saturday 05 July 2025 22:54:14 +0000 (0:00:00.485) 0:00:51.793 ********* 2025-07-05 22:54:22.559907 | orchestrator | 
skipping: [testbed-node-5] 2025-07-05 22:54:22.559926 | orchestrator | 2025-07-05 22:54:22.559946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:22.559965 | orchestrator | Saturday 05 July 2025 22:54:14 +0000 (0:00:00.185) 0:00:51.979 ********* 2025-07-05 22:54:22.560016 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.560035 | orchestrator | 2025-07-05 22:54:22.560053 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:22.560071 | orchestrator | Saturday 05 July 2025 22:54:14 +0000 (0:00:00.186) 0:00:52.165 ********* 2025-07-05 22:54:22.560088 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.560105 | orchestrator | 2025-07-05 22:54:22.560174 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:22.560210 | orchestrator | Saturday 05 July 2025 22:54:15 +0000 (0:00:00.192) 0:00:52.358 ********* 2025-07-05 22:54:22.560231 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.560250 | orchestrator | 2025-07-05 22:54:22.560269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:22.560287 | orchestrator | Saturday 05 July 2025 22:54:15 +0000 (0:00:00.182) 0:00:52.541 ********* 2025-07-05 22:54:22.560305 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.560322 | orchestrator | 2025-07-05 22:54:22.560341 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:22.560360 | orchestrator | Saturday 05 July 2025 22:54:15 +0000 (0:00:00.188) 0:00:52.729 ********* 2025-07-05 22:54:22.560378 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-07-05 22:54:22.560396 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-07-05 22:54:22.560416 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-07-05 
22:54:22.560432 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-07-05 22:54:22.560443 | orchestrator | 2025-07-05 22:54:22.560454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:22.560466 | orchestrator | Saturday 05 July 2025 22:54:16 +0000 (0:00:00.647) 0:00:53.377 ********* 2025-07-05 22:54:22.560477 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.560554 | orchestrator | 2025-07-05 22:54:22.560566 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:22.560577 | orchestrator | Saturday 05 July 2025 22:54:16 +0000 (0:00:00.260) 0:00:53.637 ********* 2025-07-05 22:54:22.560588 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.560599 | orchestrator | 2025-07-05 22:54:22.560611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:22.560622 | orchestrator | Saturday 05 July 2025 22:54:16 +0000 (0:00:00.203) 0:00:53.840 ********* 2025-07-05 22:54:22.560633 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.560644 | orchestrator | 2025-07-05 22:54:22.560655 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-05 22:54:22.560667 | orchestrator | Saturday 05 July 2025 22:54:16 +0000 (0:00:00.196) 0:00:54.037 ********* 2025-07-05 22:54:22.560707 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.560735 | orchestrator | 2025-07-05 22:54:22.560746 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-05 22:54:22.560757 | orchestrator | Saturday 05 July 2025 22:54:17 +0000 (0:00:00.197) 0:00:54.234 ********* 2025-07-05 22:54:22.560768 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.560779 | orchestrator | 2025-07-05 22:54:22.560790 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-07-05 22:54:22.560801 | orchestrator | Saturday 05 July 2025 22:54:17 +0000 (0:00:00.340) 0:00:54.574 ********* 2025-07-05 22:54:22.560812 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '469f88b0-11f8-5147-93f6-bf0afec867dc'}}) 2025-07-05 22:54:22.560824 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2969909f-2c17-514e-91b3-dec9da8cf58e'}}) 2025-07-05 22:54:22.560835 | orchestrator | 2025-07-05 22:54:22.560846 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-05 22:54:22.560857 | orchestrator | Saturday 05 July 2025 22:54:17 +0000 (0:00:00.190) 0:00:54.764 ********* 2025-07-05 22:54:22.560870 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'}) 2025-07-05 22:54:22.560896 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'}) 2025-07-05 22:54:22.560907 | orchestrator | 2025-07-05 22:54:22.560918 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-05 22:54:22.560956 | orchestrator | Saturday 05 July 2025 22:54:19 +0000 (0:00:01.894) 0:00:56.658 ********* 2025-07-05 22:54:22.560968 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})  2025-07-05 22:54:22.560981 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})  2025-07-05 22:54:22.560992 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.561003 | orchestrator | 2025-07-05 22:54:22.561014 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-07-05 22:54:22.561025 | orchestrator | Saturday 05 July 2025 22:54:19 +0000 (0:00:00.161) 0:00:56.820 ********* 2025-07-05 22:54:22.561036 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'}) 2025-07-05 22:54:22.561064 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'}) 2025-07-05 22:54:22.561076 | orchestrator | 2025-07-05 22:54:22.561087 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-05 22:54:22.561098 | orchestrator | Saturday 05 July 2025 22:54:20 +0000 (0:00:01.368) 0:00:58.189 ********* 2025-07-05 22:54:22.561134 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})  2025-07-05 22:54:22.561148 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})  2025-07-05 22:54:22.561159 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.561170 | orchestrator | 2025-07-05 22:54:22.561181 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-05 22:54:22.561191 | orchestrator | Saturday 05 July 2025 22:54:21 +0000 (0:00:00.160) 0:00:58.350 ********* 2025-07-05 22:54:22.561203 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:54:22.561213 | orchestrator | 2025-07-05 22:54:22.561225 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-05 22:54:22.561236 | orchestrator | Saturday 05 July 2025 22:54:21 +0000 (0:00:00.139) 0:00:58.489 ********* 2025-07-05 22:54:22.561247 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:22.561258 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:22.561269 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:22.561280 | orchestrator |
2025-07-05 22:54:22.561291 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-07-05 22:54:22.561302 | orchestrator | Saturday 05 July 2025 22:54:21 +0000 (0:00:00.160) 0:00:58.650 *********
2025-07-05 22:54:22.561313 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:22.561324 | orchestrator |
2025-07-05 22:54:22.561340 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-07-05 22:54:22.561359 | orchestrator | Saturday 05 July 2025 22:54:21 +0000 (0:00:00.134) 0:00:58.785 *********
2025-07-05 22:54:22.561386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:22.561502 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:22.561523 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:22.561542 | orchestrator |
2025-07-05 22:54:22.561560 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-07-05 22:54:22.561576 | orchestrator | Saturday 05 July 2025 22:54:21 +0000 (0:00:00.170) 0:00:58.955 *********
2025-07-05 22:54:22.561587 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:22.561598 | orchestrator |
2025-07-05 22:54:22.561610 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-07-05 22:54:22.561621 | orchestrator | Saturday 05 July 2025 22:54:21 +0000 (0:00:00.139) 0:00:59.094 *********
2025-07-05 22:54:22.561632 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:22.561643 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:22.561654 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:22.561665 | orchestrator |
2025-07-05 22:54:22.561685 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-07-05 22:54:22.561696 | orchestrator | Saturday 05 July 2025 22:54:22 +0000 (0:00:00.138) 0:00:59.240 *********
2025-07-05 22:54:22.561707 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:54:22.561719 | orchestrator |
2025-07-05 22:54:22.561730 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-07-05 22:54:22.561741 | orchestrator | Saturday 05 July 2025 22:54:22 +0000 (0:00:00.138) 0:00:59.379 *********
2025-07-05 22:54:22.561764 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:28.651430 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:28.651570 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.651588 | orchestrator |
2025-07-05 22:54:28.651601 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-07-05 22:54:28.651614 | orchestrator | Saturday 05 July 2025 22:54:22 +0000 (0:00:00.381) 0:00:59.760 *********
2025-07-05 22:54:28.651625 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:28.651637 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:28.651649 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.651660 | orchestrator |
2025-07-05 22:54:28.651672 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-07-05 22:54:28.651683 | orchestrator | Saturday 05 July 2025 22:54:22 +0000 (0:00:00.155) 0:00:59.915 *********
2025-07-05 22:54:28.651695 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:28.651706 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:28.651718 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.651729 | orchestrator |
2025-07-05 22:54:28.651740 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-07-05 22:54:28.651752 | orchestrator | Saturday 05 July 2025 22:54:22 +0000 (0:00:00.146) 0:01:00.062 *********
2025-07-05 22:54:28.651763 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.651774 | orchestrator |
2025-07-05 22:54:28.651785 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-07-05 22:54:28.651819 | orchestrator | Saturday 05 July 2025 22:54:22 +0000 (0:00:00.139) 0:01:00.202 *********
2025-07-05 22:54:28.651830 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.651842 | orchestrator |
2025-07-05 22:54:28.651853 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-07-05 22:54:28.651864 | orchestrator | Saturday 05 July 2025 22:54:23 +0000 (0:00:00.153) 0:01:00.355 *********
2025-07-05 22:54:28.651875 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.651886 | orchestrator |
2025-07-05 22:54:28.651897 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-07-05 22:54:28.651908 | orchestrator | Saturday 05 July 2025 22:54:23 +0000 (0:00:00.138) 0:01:00.494 *********
2025-07-05 22:54:28.651919 | orchestrator | ok: [testbed-node-5] => {
2025-07-05 22:54:28.651931 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-07-05 22:54:28.651942 | orchestrator | }
2025-07-05 22:54:28.651956 | orchestrator |
2025-07-05 22:54:28.651969 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-07-05 22:54:28.651981 | orchestrator | Saturday 05 July 2025 22:54:23 +0000 (0:00:00.134) 0:01:00.628 *********
2025-07-05 22:54:28.651994 | orchestrator | ok: [testbed-node-5] => {
2025-07-05 22:54:28.652006 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-07-05 22:54:28.652019 | orchestrator | }
2025-07-05 22:54:28.652031 | orchestrator |
2025-07-05 22:54:28.652043 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-07-05 22:54:28.652056 | orchestrator | Saturday 05 July 2025 22:54:23 +0000 (0:00:00.144) 0:01:00.773 *********
2025-07-05 22:54:28.652068 | orchestrator | ok: [testbed-node-5] => {
2025-07-05 22:54:28.652080 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-07-05 22:54:28.652093 | orchestrator | }
2025-07-05 22:54:28.652123 | orchestrator |
2025-07-05 22:54:28.652135 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-07-05 22:54:28.652148 | orchestrator | Saturday 05 July 2025 22:54:23 +0000 (0:00:00.158) 0:01:00.932 *********
2025-07-05 22:54:28.652161 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:54:28.652173 | orchestrator |
2025-07-05 22:54:28.652186 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-07-05 22:54:28.652198 | orchestrator | Saturday 05 July 2025 22:54:24 +0000 (0:00:00.507) 0:01:01.439 *********
2025-07-05 22:54:28.652210 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:54:28.652223 | orchestrator |
2025-07-05 22:54:28.652235 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-07-05 22:54:28.652248 | orchestrator | Saturday 05 July 2025 22:54:24 +0000 (0:00:00.547) 0:01:01.986 *********
2025-07-05 22:54:28.652260 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:54:28.652272 | orchestrator |
2025-07-05 22:54:28.652285 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-07-05 22:54:28.652298 | orchestrator | Saturday 05 July 2025 22:54:25 +0000 (0:00:00.343) 0:01:02.486 *********
2025-07-05 22:54:28.652309 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:54:28.652320 | orchestrator |
2025-07-05 22:54:28.652331 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-07-05 22:54:28.652343 | orchestrator | Saturday 05 July 2025 22:54:25 +0000 (0:00:00.121) 0:01:02.830 *********
2025-07-05 22:54:28.652354 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.652379 | orchestrator |
2025-07-05 22:54:28.652391 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-07-05 22:54:28.652402 | orchestrator | Saturday 05 July 2025 22:54:25 +0000 (0:00:00.118) 0:01:02.952 *********
2025-07-05 22:54:28.652413 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.652424 | orchestrator |
2025-07-05 22:54:28.652435 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-07-05 22:54:28.652446 | orchestrator | Saturday 05 July 2025 22:54:25 +0000 (0:00:00.148) 0:01:03.071 *********
2025-07-05 22:54:28.652458 | orchestrator | ok: [testbed-node-5] => {
2025-07-05 22:54:28.652478 | orchestrator |  "vgs_report": {
2025-07-05 22:54:28.652490 | orchestrator |  "vg": []
2025-07-05 22:54:28.652521 | orchestrator |  }
2025-07-05 22:54:28.652533 | orchestrator | }
2025-07-05 22:54:28.652545 | orchestrator |
2025-07-05 22:54:28.652556 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-07-05 22:54:28.652567 | orchestrator | Saturday 05 July 2025 22:54:26 +0000 (0:00:00.148) 0:01:03.219 *********
2025-07-05 22:54:28.652578 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.652589 | orchestrator |
2025-07-05 22:54:28.652601 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-07-05 22:54:28.652612 | orchestrator | Saturday 05 July 2025 22:54:26 +0000 (0:00:00.141) 0:01:03.361 *********
2025-07-05 22:54:28.652623 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.652634 | orchestrator |
2025-07-05 22:54:28.652645 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-07-05 22:54:28.652657 | orchestrator | Saturday 05 July 2025 22:54:26 +0000 (0:00:00.162) 0:01:03.523 *********
2025-07-05 22:54:28.652668 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.652679 | orchestrator |
2025-07-05 22:54:28.652750 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-07-05 22:54:28.652765 | orchestrator | Saturday 05 July 2025 22:54:26 +0000 (0:00:00.138) 0:01:03.661 *********
2025-07-05 22:54:28.652776 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.652787 | orchestrator |
2025-07-05 22:54:28.652798 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-07-05 22:54:28.652810 | orchestrator | Saturday 05 July 2025 22:54:26 +0000 (0:00:00.135) 0:01:03.796 *********
2025-07-05 22:54:28.652820 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.652831 | orchestrator |
2025-07-05 22:54:28.652843 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-07-05 22:54:28.652854 | orchestrator | Saturday 05 July 2025 22:54:26 +0000 (0:00:00.131) 0:01:03.928 *********
2025-07-05 22:54:28.652865 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.652876 | orchestrator |
2025-07-05 22:54:28.652887 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-07-05 22:54:28.652898 | orchestrator | Saturday 05 July 2025 22:54:26 +0000 (0:00:00.127) 0:01:04.056 *********
2025-07-05 22:54:28.652909 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.652920 | orchestrator |
2025-07-05 22:54:28.652931 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-07-05 22:54:28.652942 | orchestrator | Saturday 05 July 2025 22:54:26 +0000 (0:00:00.143) 0:01:04.199 *********
2025-07-05 22:54:28.652953 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.652964 | orchestrator |
2025-07-05 22:54:28.652975 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-07-05 22:54:28.652986 | orchestrator | Saturday 05 July 2025 22:54:27 +0000 (0:00:00.138) 0:01:04.337 *********
2025-07-05 22:54:28.652997 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.653008 | orchestrator |
2025-07-05 22:54:28.653019 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-07-05 22:54:28.653030 | orchestrator | Saturday 05 July 2025 22:54:27 +0000 (0:00:00.346) 0:01:04.684 *********
2025-07-05 22:54:28.653041 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.653051 | orchestrator |
2025-07-05 22:54:28.653062 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-07-05 22:54:28.653073 | orchestrator | Saturday 05 July 2025 22:54:27 +0000 (0:00:00.132) 0:01:04.817 *********
2025-07-05 22:54:28.653084 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.653095 | orchestrator |
2025-07-05 22:54:28.653134 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-07-05 22:54:28.653145 | orchestrator | Saturday 05 July 2025 22:54:27 +0000 (0:00:00.136) 0:01:04.953 *********
2025-07-05 22:54:28.653156 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.653167 | orchestrator |
2025-07-05 22:54:28.653178 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-07-05 22:54:28.653202 | orchestrator | Saturday 05 July 2025 22:54:27 +0000 (0:00:00.140) 0:01:05.094 *********
2025-07-05 22:54:28.653213 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.653224 | orchestrator |
2025-07-05 22:54:28.653235 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-07-05 22:54:28.653246 | orchestrator | Saturday 05 July 2025 22:54:28 +0000 (0:00:00.138) 0:01:05.232 *********
2025-07-05 22:54:28.653257 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.653268 | orchestrator |
2025-07-05 22:54:28.653279 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-07-05 22:54:28.653290 | orchestrator | Saturday 05 July 2025 22:54:28 +0000 (0:00:00.158) 0:01:05.391 *********
2025-07-05 22:54:28.653302 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:28.653313 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:28.653324 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.653335 | orchestrator |
2025-07-05 22:54:28.653346 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-07-05 22:54:28.653358 | orchestrator | Saturday 05 July 2025 22:54:28 +0000 (0:00:00.154) 0:01:05.545 *********
2025-07-05 22:54:28.653375 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:28.653386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:28.653397 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:28.653408 | orchestrator |
2025-07-05 22:54:28.653419 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-07-05 22:54:28.653431 | orchestrator | Saturday 05 July 2025 22:54:28 +0000 (0:00:00.151) 0:01:05.698 *********
2025-07-05 22:54:28.653450 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:31.645746 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:31.645860 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:31.645877 | orchestrator |
2025-07-05 22:54:31.645890 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-07-05 22:54:31.645903 | orchestrator | Saturday 05 July 2025 22:54:28 +0000 (0:00:00.151) 0:01:05.850 *********
2025-07-05 22:54:31.645915 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:31.645926 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:31.645937 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:31.645948 | orchestrator |
2025-07-05 22:54:31.645959 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-07-05 22:54:31.645970 | orchestrator | Saturday 05 July 2025 22:54:28 +0000 (0:00:00.158) 0:01:06.009 *********
2025-07-05 22:54:31.645982 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:31.645993 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:31.646004 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:31.646159 | orchestrator |
2025-07-05 22:54:31.646204 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-07-05 22:54:31.646216 | orchestrator | Saturday 05 July 2025 22:54:28 +0000 (0:00:00.151) 0:01:06.160 *********
2025-07-05 22:54:31.646228 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:31.646240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:31.646251 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:31.646262 | orchestrator |
2025-07-05 22:54:31.646276 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-07-05 22:54:31.646288 | orchestrator | Saturday 05 July 2025 22:54:29 +0000 (0:00:00.150) 0:01:06.311 *********
2025-07-05 22:54:31.646300 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:31.646313 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:31.646325 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:31.646338 | orchestrator |
2025-07-05 22:54:31.646350 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-07-05 22:54:31.646362 | orchestrator | Saturday 05 July 2025 22:54:29 +0000 (0:00:00.367) 0:01:06.679 *********
2025-07-05 22:54:31.646375 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:31.646387 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:31.646400 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:31.646411 | orchestrator |
2025-07-05 22:54:31.646425 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-07-05 22:54:31.646437 | orchestrator | Saturday 05 July 2025 22:54:29 +0000 (0:00:00.159) 0:01:06.839 *********
2025-07-05 22:54:31.646449 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:54:31.646461 | orchestrator |
2025-07-05 22:54:31.646476 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-07-05 22:54:31.646497 | orchestrator | Saturday 05 July 2025 22:54:30 +0000 (0:00:00.526) 0:01:07.365 *********
2025-07-05 22:54:31.646516 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:54:31.646535 | orchestrator |
2025-07-05 22:54:31.646556 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-07-05 22:54:31.646577 | orchestrator | Saturday 05 July 2025 22:54:30 +0000 (0:00:00.530) 0:01:07.896 *********
2025-07-05 22:54:31.646598 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:54:31.646619 | orchestrator |
2025-07-05 22:54:31.646632 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-07-05 22:54:31.646643 | orchestrator | Saturday 05 July 2025 22:54:30 +0000 (0:00:00.152) 0:01:08.049 *********
2025-07-05 22:54:31.646655 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'vg_name': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:31.646667 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'vg_name': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:31.646678 | orchestrator |
2025-07-05 22:54:31.646689 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-07-05 22:54:31.646701 | orchestrator | Saturday 05 July 2025 22:54:31 +0000 (0:00:00.167) 0:01:08.216 *********
2025-07-05 22:54:31.646732 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:31.646744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:31.646764 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:31.646775 | orchestrator |
2025-07-05 22:54:31.646786 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-07-05 22:54:31.646797 | orchestrator | Saturday 05 July 2025 22:54:31 +0000 (0:00:00.162) 0:01:08.379 *********
2025-07-05 22:54:31.646808 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:31.646819 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:31.646830 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:31.646841 | orchestrator |
2025-07-05 22:54:31.646852 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-07-05 22:54:31.646863 | orchestrator | Saturday 05 July 2025 22:54:31 +0000 (0:00:00.147) 0:01:08.526 *********
2025-07-05 22:54:31.646874 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 22:54:31.646885 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 22:54:31.646896 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:31.646907 | orchestrator |
2025-07-05 22:54:31.646918 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-07-05 22:54:31.646947 | orchestrator | Saturday 05 July 2025 22:54:31 +0000 (0:00:00.149) 0:01:08.676 *********
2025-07-05 22:54:31.646959 | orchestrator | ok: [testbed-node-5] => {
2025-07-05 22:54:31.646970 | orchestrator |  "lvm_report": {
2025-07-05 22:54:31.646981 | orchestrator |  "lv": [
2025-07-05 22:54:31.646993 | orchestrator |  {
2025-07-05 22:54:31.647004 | orchestrator |  "lv_name": "osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e",
2025-07-05 22:54:31.647016 | orchestrator |  "vg_name": "ceph-2969909f-2c17-514e-91b3-dec9da8cf58e"
2025-07-05 22:54:31.647027 | orchestrator |  },
2025-07-05 22:54:31.647037 | orchestrator |  {
2025-07-05 22:54:31.647048 | orchestrator |  "lv_name": "osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc",
2025-07-05 22:54:31.647059 | orchestrator |  "vg_name": "ceph-469f88b0-11f8-5147-93f6-bf0afec867dc"
2025-07-05 22:54:31.647070 | orchestrator |  }
2025-07-05 22:54:31.647081 | orchestrator |  ],
2025-07-05 22:54:31.647116 | orchestrator |  "pv": [
2025-07-05 22:54:31.647129 | orchestrator |  {
2025-07-05 22:54:31.647140 | orchestrator |  "pv_name": "/dev/sdb",
2025-07-05 22:54:31.647152 | orchestrator |  "vg_name": "ceph-469f88b0-11f8-5147-93f6-bf0afec867dc"
2025-07-05 22:54:31.647163 | orchestrator |  },
2025-07-05 22:54:31.647174 | orchestrator |  {
2025-07-05 22:54:31.647185 | orchestrator |  "pv_name": "/dev/sdc",
2025-07-05 22:54:31.647196 | orchestrator |  "vg_name": "ceph-2969909f-2c17-514e-91b3-dec9da8cf58e"
2025-07-05 22:54:31.647207 | orchestrator |  }
2025-07-05 22:54:31.647218 | orchestrator |  ]
2025-07-05 22:54:31.647229 | orchestrator |  }
2025-07-05 22:54:31.647240 | orchestrator | }
2025-07-05 22:54:31.647251 | orchestrator |
2025-07-05 22:54:31.647262 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 22:54:31.647273 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-05 22:54:31.647285 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-05 22:54:31.647304 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-05 22:54:31.647315 | orchestrator |
2025-07-05 22:54:31.647326 | orchestrator |
2025-07-05 22:54:31.647337 | orchestrator |
2025-07-05 22:54:31.647348 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 22:54:31.647360 | orchestrator | Saturday 05 July 2025 22:54:31 +0000 (0:00:00.147) 0:01:08.824 *********
2025-07-05 22:54:31.647371 | orchestrator | ===============================================================================
2025-07-05 22:54:31.647382 | orchestrator | Create block VGs -------------------------------------------------------- 5.66s
2025-07-05 22:54:31.647398 | orchestrator | Create block LVs -------------------------------------------------------- 4.20s
2025-07-05 22:54:31.647409 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.86s
2025-07-05 22:54:31.647420 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.69s
2025-07-05 22:54:31.647431 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.63s
2025-07-05 22:54:31.647442 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.60s
2025-07-05 22:54:31.647453 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s
2025-07-05 22:54:31.647464 | orchestrator | Add known partitions to the list of available block devices ------------- 1.25s
2025-07-05 22:54:31.647489 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s
2025-07-05 22:54:32.003281 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s
2025-07-05 22:54:32.003374 | orchestrator | Print LVM report data --------------------------------------------------- 0.81s
2025-07-05 22:54:32.003384 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2025-07-05 22:54:32.003393 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.72s
2025-07-05 22:54:32.003401 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.69s
2025-07-05 22:54:32.003410 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s
2025-07-05 22:54:32.003418 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.65s
2025-07-05 22:54:32.003426 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-07-05 22:54:32.003435 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2025-07-05 22:54:32.003443 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.64s
2025-07-05 22:54:32.003451 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-07-05 22:54:44.210523 | orchestrator | 2025-07-05 22:54:44 | INFO  | Task f87cbd9e-8741-49e2-aa29-af1b757e6332 (facts) was prepared for execution.
2025-07-05 22:54:44.210633 | orchestrator | 2025-07-05 22:54:44 | INFO  | It takes a moment until task f87cbd9e-8741-49e2-aa29-af1b757e6332 (facts) has been started and output is visible here.
2025-07-05 22:54:55.869842 | orchestrator |
2025-07-05 22:54:55.869950 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-07-05 22:54:55.869967 | orchestrator |
2025-07-05 22:54:55.869979 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-05 22:54:55.869991 | orchestrator | Saturday 05 July 2025 22:54:47 +0000 (0:00:00.243) 0:00:00.243 *********
2025-07-05 22:54:55.870003 | orchestrator | ok: [testbed-manager]
2025-07-05 22:54:55.870070 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:54:55.870149 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:54:55.870162 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:54:55.870174 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:54:55.870185 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:54:55.870196 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:54:55.870207 | orchestrator |
2025-07-05 22:54:55.870219 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-05 22:54:55.870260 | orchestrator | Saturday 05 July 2025 22:54:48 +0000 (0:00:01.047) 0:00:01.291 *********
2025-07-05 22:54:55.870272 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:54:55.870284 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:54:55.870295 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:54:55.870306 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:54:55.870317 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:54:55.870328 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:55.870339 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:55.870350 | orchestrator |
2025-07-05 22:54:55.870362 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-05 22:54:55.870373 | orchestrator |
2025-07-05 22:54:55.870387 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-05 22:54:55.870399 | orchestrator | Saturday 05 July 2025 22:54:50 +0000 (0:00:01.132) 0:00:02.424 *********
2025-07-05 22:54:55.870412 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:54:55.870425 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:54:55.870437 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:54:55.870449 | orchestrator | ok: [testbed-manager]
2025-07-05 22:54:55.870461 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:54:55.870473 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:54:55.870485 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:54:55.870496 | orchestrator |
2025-07-05 22:54:55.870509 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-05 22:54:55.870521 | orchestrator |
2025-07-05 22:54:55.870533 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-05 22:54:55.870546 | orchestrator | Saturday 05 July 2025 22:54:54 +0000 (0:00:04.977) 0:00:07.401 *********
2025-07-05 22:54:55.870558 | orchestrator | skipping: [testbed-manager]
2025-07-05 22:54:55.870570 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:54:55.870582 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:54:55.870594 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:54:55.870607 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:54:55.870619 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:54:55.870632 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:54:55.870644 | orchestrator |
2025-07-05 22:54:55.870661 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 22:54:55.870683 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 22:54:55.870702 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 22:54:55.870740 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 22:54:55.870755 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 22:54:55.870766 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 22:54:55.870777 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 22:54:55.870788 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 22:54:55.870799 | orchestrator |
2025-07-05 22:54:55.870810 | orchestrator |
2025-07-05 22:54:55.870821 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 22:54:55.870833 | orchestrator | Saturday 05 July 2025 22:54:55 +0000 (0:00:00.546) 0:00:07.948 *********
2025-07-05 22:54:55.870844 | orchestrator | ===============================================================================
2025-07-05 22:54:55.870867 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.98s
2025-07-05 22:54:55.870878 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.13s
2025-07-05 22:54:55.870889 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.05s
2025-07-05 22:54:55.870900 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s
2025-07-05 22:54:56.050448 | orchestrator |
2025-07-05 22:54:56.055648 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Jul 5 22:54:56 UTC 2025
2025-07-05 22:54:56.055729 | orchestrator |
2025-07-05 22:54:57.665242 | orchestrator | 2025-07-05 22:54:57 | INFO  | Collection nutshell is prepared for execution
2025-07-05 22:54:57.665351 | orchestrator | 2025-07-05 22:54:57 | INFO  | D [0] - dotfiles
2025-07-05 22:55:07.686938 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [0] - homer
2025-07-05 22:55:07.687049 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [0] - netdata
2025-07-05 22:55:07.687065 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [0] - openstackclient
2025-07-05 22:55:07.687078 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [0] - phpmyadmin
2025-07-05 22:55:07.687329 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [0] - common
2025-07-05 22:55:07.690988 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [1] -- loadbalancer
2025-07-05 22:55:07.691204 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [2] --- opensearch
2025-07-05 22:55:07.691235 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [2] --- mariadb-ng
2025-07-05 22:55:07.691518 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [3] ---- horizon
2025-07-05 22:55:07.691541 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [3] ---- keystone
2025-07-05 22:55:07.691789 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [4] ----- neutron
2025-07-05 22:55:07.691992 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [5] ------ wait-for-nova
2025-07-05 22:55:07.692152 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [5] ------ octavia
2025-07-05 22:55:07.693700 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [4] ----- barbican
2025-07-05 22:55:07.693860 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [4] ----- designate
2025-07-05 22:55:07.693889 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [4] ----- ironic
2025-07-05 22:55:07.694172 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [4] ----- placement
2025-07-05 22:55:07.694197 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [4] ----- magnum
2025-07-05 22:55:07.694981 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [1] -- openvswitch
2025-07-05 22:55:07.695008 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [2] --- ovn
2025-07-05 22:55:07.695312 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [1] -- memcached
2025-07-05 22:55:07.695497 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [1] -- redis 2025-07-05 22:55:07.695516 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [1] -- rabbitmq-ng 2025-07-05 22:55:07.695875 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [0] - kubernetes 2025-07-05 22:55:07.698744 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [1] -- kubeconfig 2025-07-05 22:55:07.698822 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [1] -- copy-kubeconfig 2025-07-05 22:55:07.698837 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [0] - ceph 2025-07-05 22:55:07.701325 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [1] -- ceph-pools 2025-07-05 22:55:07.701371 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [2] --- copy-ceph-keys 2025-07-05 22:55:07.701385 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [3] ---- cephclient 2025-07-05 22:55:07.701405 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-07-05 22:55:07.701450 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [4] ----- wait-for-keystone 2025-07-05 22:55:07.701597 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [5] ------ kolla-ceph-rgw 2025-07-05 22:55:07.701619 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [5] ------ glance 2025-07-05 22:55:07.701637 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [5] ------ cinder 2025-07-05 22:55:07.701650 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [5] ------ nova 2025-07-05 22:55:07.702312 | orchestrator | 2025-07-05 22:55:07 | INFO  | A [4] ----- prometheus 2025-07-05 22:55:07.702385 | orchestrator | 2025-07-05 22:55:07 | INFO  | D [5] ------ grafana 2025-07-05 22:55:07.892242 | orchestrator | 2025-07-05 22:55:07 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-07-05 22:55:07.892343 | orchestrator | 2025-07-05 22:55:07 | INFO  | Tasks are running in the background 2025-07-05 22:55:10.417636 | orchestrator | 2025-07-05 22:55:10 | INFO  | No task IDs specified, wait for all 
currently running tasks 2025-07-05 22:55:12.544550 | orchestrator | 2025-07-05 22:55:12 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:12.544932 | orchestrator | 2025-07-05 22:55:12 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:12.545796 | orchestrator | 2025-07-05 22:55:12 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:12.546859 | orchestrator | 2025-07-05 22:55:12 | INFO  | Task 6ba4e29a-9e43-4ca8-8346-a32e2ebb31d8 is in state STARTED 2025-07-05 22:55:12.550171 | orchestrator | 2025-07-05 22:55:12 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:12.550815 | orchestrator | 2025-07-05 22:55:12 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:12.551711 | orchestrator | 2025-07-05 22:55:12 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:12.551735 | orchestrator | 2025-07-05 22:55:12 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:15.612321 | orchestrator | 2025-07-05 22:55:15 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:15.612426 | orchestrator | 2025-07-05 22:55:15 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:15.612440 | orchestrator | 2025-07-05 22:55:15 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:15.612452 | orchestrator | 2025-07-05 22:55:15 | INFO  | Task 6ba4e29a-9e43-4ca8-8346-a32e2ebb31d8 is in state STARTED 2025-07-05 22:55:15.612464 | orchestrator | 2025-07-05 22:55:15 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:15.612475 | orchestrator | 2025-07-05 22:55:15 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:15.615222 | orchestrator | 2025-07-05 22:55:15 | INFO  | Task 
029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:15.615289 | orchestrator | 2025-07-05 22:55:15 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:18.646519 | orchestrator | 2025-07-05 22:55:18 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:18.647093 | orchestrator | 2025-07-05 22:55:18 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:18.647114 | orchestrator | 2025-07-05 22:55:18 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:18.648005 | orchestrator | 2025-07-05 22:55:18 | INFO  | Task 6ba4e29a-9e43-4ca8-8346-a32e2ebb31d8 is in state STARTED 2025-07-05 22:55:18.648218 | orchestrator | 2025-07-05 22:55:18 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:18.650703 | orchestrator | 2025-07-05 22:55:18 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:18.651153 | orchestrator | 2025-07-05 22:55:18 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:18.651179 | orchestrator | 2025-07-05 22:55:18 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:21.705761 | orchestrator | 2025-07-05 22:55:21 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:21.705864 | orchestrator | 2025-07-05 22:55:21 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:21.710151 | orchestrator | 2025-07-05 22:55:21 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:21.710243 | orchestrator | 2025-07-05 22:55:21 | INFO  | Task 6ba4e29a-9e43-4ca8-8346-a32e2ebb31d8 is in state STARTED 2025-07-05 22:55:21.710279 | orchestrator | 2025-07-05 22:55:21 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:21.711602 | orchestrator | 2025-07-05 22:55:21 | INFO  | Task 
23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:21.712051 | orchestrator | 2025-07-05 22:55:21 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:21.712103 | orchestrator | 2025-07-05 22:55:21 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:24.756871 | orchestrator | 2025-07-05 22:55:24 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:24.757087 | orchestrator | 2025-07-05 22:55:24 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:24.757124 | orchestrator | 2025-07-05 22:55:24 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:24.759195 | orchestrator | 2025-07-05 22:55:24 | INFO  | Task 6ba4e29a-9e43-4ca8-8346-a32e2ebb31d8 is in state STARTED 2025-07-05 22:55:24.760971 | orchestrator | 2025-07-05 22:55:24 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:24.761263 | orchestrator | 2025-07-05 22:55:24 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:24.761750 | orchestrator | 2025-07-05 22:55:24 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:24.761848 | orchestrator | 2025-07-05 22:55:24 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:27.829029 | orchestrator | 2025-07-05 22:55:27 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:27.829127 | orchestrator | 2025-07-05 22:55:27 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:27.831501 | orchestrator | 2025-07-05 22:55:27 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:27.836649 | orchestrator | 2025-07-05 22:55:27 | INFO  | Task 6ba4e29a-9e43-4ca8-8346-a32e2ebb31d8 is in state STARTED 2025-07-05 22:55:27.837801 | orchestrator | 2025-07-05 22:55:27 | INFO  | Task 
657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:27.841716 | orchestrator | 2025-07-05 22:55:27 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:27.842814 | orchestrator | 2025-07-05 22:55:27 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:27.843671 | orchestrator | 2025-07-05 22:55:27 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:30.915819 | orchestrator | 2025-07-05 22:55:30 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:30.924499 | orchestrator | 2025-07-05 22:55:30 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:30.924556 | orchestrator | 2025-07-05 22:55:30 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:30.927192 | orchestrator | 2025-07-05 22:55:30 | INFO  | Task 6ba4e29a-9e43-4ca8-8346-a32e2ebb31d8 is in state STARTED 2025-07-05 22:55:30.936413 | orchestrator | 2025-07-05 22:55:30 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:30.936463 | orchestrator | 2025-07-05 22:55:30 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:30.939098 | orchestrator | 2025-07-05 22:55:30 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:30.939124 | orchestrator | 2025-07-05 22:55:30 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:33.995606 | orchestrator | 2025-07-05 22:55:33 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:33.996066 | orchestrator | 2025-07-05 22:55:33 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:33.996618 | orchestrator | 2025-07-05 22:55:33 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:33.997408 | orchestrator | 2025-07-05 22:55:33 | INFO  | Task 
6ba4e29a-9e43-4ca8-8346-a32e2ebb31d8 is in state STARTED 2025-07-05 22:55:33.997433 | orchestrator | 2025-07-05 22:55:33 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:33.998787 | orchestrator | 2025-07-05 22:55:33 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:33.998812 | orchestrator | 2025-07-05 22:55:33 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:33.998824 | orchestrator | 2025-07-05 22:55:33 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:37.053813 | orchestrator | 2025-07-05 22:55:37 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED 2025-07-05 22:55:37.055654 | orchestrator | 2025-07-05 22:55:37 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:37.057374 | orchestrator | 2025-07-05 22:55:37 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:37.062655 | orchestrator | 2025-07-05 22:55:37 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:37.062937 | orchestrator | 2025-07-05 22:55:37 | INFO  | Task 6ba4e29a-9e43-4ca8-8346-a32e2ebb31d8 is in state SUCCESS 2025-07-05 22:55:37.062953 | orchestrator | 2025-07-05 22:55:37 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:37.062973 | orchestrator | 2025-07-05 22:55:37.062986 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-07-05 22:55:37.063003 | orchestrator | 2025-07-05 22:55:37.063022 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-07-05 22:55:37.063042 | orchestrator | Saturday 05 July 2025 22:55:18 +0000 (0:00:00.294) 0:00:00.294 ********* 2025-07-05 22:55:37.063074 | orchestrator | changed: [testbed-manager] 2025-07-05 22:55:37.063097 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:55:37.063115 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:55:37.063131 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:55:37.063142 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:55:37.063199 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:55:37.063211 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:55:37.063246 | orchestrator | 2025-07-05 22:55:37.063258 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-07-05 22:55:37.063270 | orchestrator | Saturday 05 July 2025 22:55:23 +0000 (0:00:04.215) 0:00:04.509 ********* 2025-07-05 22:55:37.063282 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-07-05 22:55:37.063293 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-07-05 22:55:37.063305 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-07-05 22:55:37.063316 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-07-05 22:55:37.063327 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-07-05 22:55:37.063338 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-07-05 22:55:37.063349 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-07-05 22:55:37.063360 | orchestrator | 2025-07-05 22:55:37.063371 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-07-05 22:55:37.063382 | orchestrator | Saturday 05 July 2025 22:55:25 +0000 (0:00:02.208) 0:00:06.718 ********* 2025-07-05 22:55:37.063397 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-05 22:55:23.830318', 'end': '2025-07-05 22:55:23.839888', 'delta': '0:00:00.009570', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-05 22:55:37.063419 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-05 22:55:23.714013', 'end': '2025-07-05 22:55:23.720361', 'delta': '0:00:00.006348', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-05 22:55:37.063436 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-05 22:55:23.966463', 'end': '2025-07-05 22:55:23.971782', 'delta': '0:00:00.005319', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-05 22:55:37.063469 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-05 22:55:24.240810', 'end': '2025-07-05 22:55:24.251098', 'delta': '0:00:00.010288', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-05 22:55:37.063490 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-05 22:55:24.349267', 'end': '2025-07-05 22:55:24.357788', 'delta': '0:00:00.008521', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-05 22:55:37.063503 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-05 22:55:24.577188', 'end': '2025-07-05 22:55:24.585890', 'delta': '0:00:00.008702', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-05 22:55:37.063517 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-05 22:55:24.917571', 'end': '2025-07-05 22:55:24.928862', 'delta': '0:00:00.011291', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-05 22:55:37.063530 | orchestrator | 2025-07-05 22:55:37.063542 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-07-05 22:55:37.063556 | orchestrator | Saturday 05 July 2025 22:55:28 +0000 (0:00:03.264) 0:00:09.982 ********* 2025-07-05 22:55:37.063569 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-07-05 22:55:37.063582 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-07-05 22:55:37.063594 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-07-05 22:55:37.063607 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-07-05 22:55:37.063624 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-07-05 22:55:37.063637 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-07-05 22:55:37.063650 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-07-05 22:55:37.063662 | orchestrator | 2025-07-05 22:55:37.063675 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-07-05 22:55:37.063687 | orchestrator | Saturday 05 July 2025 22:55:30 +0000 (0:00:01.962) 0:00:11.945 ********* 2025-07-05 22:55:37.063712 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-07-05 22:55:37.063725 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-07-05 22:55:37.063737 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-07-05 22:55:37.063749 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-07-05 22:55:37.063762 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-07-05 22:55:37.063774 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-07-05 22:55:37.063787 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-07-05 22:55:37.063816 | orchestrator | 2025-07-05 22:55:37.063829 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:55:37.063850 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:55:37.063864 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:55:37.063876 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:55:37.063887 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:55:37.063898 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:55:37.064023 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:55:37.064039 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:55:37.064054 | orchestrator | 2025-07-05 22:55:37.064074 | orchestrator | 2025-07-05 22:55:37.064093 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-07-05 22:55:37.064113 | orchestrator | Saturday 05 July 2025 22:55:34 +0000 (0:00:04.133) 0:00:16.078 ********* 2025-07-05 22:55:37.064133 | orchestrator | =============================================================================== 2025-07-05 22:55:37.064176 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.22s 2025-07-05 22:55:37.064194 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.13s 2025-07-05 22:55:37.064205 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.26s 2025-07-05 22:55:37.064216 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.21s 2025-07-05 22:55:37.064227 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.96s 2025-07-05 22:55:37.064244 | orchestrator | 2025-07-05 22:55:37 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:37.066983 | orchestrator | 2025-07-05 22:55:37 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:37.067012 | orchestrator | 2025-07-05 22:55:37 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:40.114451 | orchestrator | 2025-07-05 22:55:40 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED 2025-07-05 22:55:40.114554 | orchestrator | 2025-07-05 22:55:40 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:40.115349 | orchestrator | 2025-07-05 22:55:40 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:40.118274 | orchestrator | 2025-07-05 22:55:40 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:40.118304 | orchestrator | 2025-07-05 22:55:40 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is 
in state STARTED 2025-07-05 22:55:40.119296 | orchestrator | 2025-07-05 22:55:40 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:40.119320 | orchestrator | 2025-07-05 22:55:40 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:40.119332 | orchestrator | 2025-07-05 22:55:40 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:43.158535 | orchestrator | 2025-07-05 22:55:43 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED 2025-07-05 22:55:43.158787 | orchestrator | 2025-07-05 22:55:43 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:43.161787 | orchestrator | 2025-07-05 22:55:43 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:43.162266 | orchestrator | 2025-07-05 22:55:43 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:43.165538 | orchestrator | 2025-07-05 22:55:43 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:43.167232 | orchestrator | 2025-07-05 22:55:43 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:43.167471 | orchestrator | 2025-07-05 22:55:43 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:43.167498 | orchestrator | 2025-07-05 22:55:43 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:46.212859 | orchestrator | 2025-07-05 22:55:46 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED 2025-07-05 22:55:46.212963 | orchestrator | 2025-07-05 22:55:46 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:46.215055 | orchestrator | 2025-07-05 22:55:46 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:46.215082 | orchestrator | 2025-07-05 22:55:46 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in 
state STARTED 2025-07-05 22:55:46.220275 | orchestrator | 2025-07-05 22:55:46 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:46.225010 | orchestrator | 2025-07-05 22:55:46 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:46.225094 | orchestrator | 2025-07-05 22:55:46 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:46.225109 | orchestrator | 2025-07-05 22:55:46 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:49.289253 | orchestrator | 2025-07-05 22:55:49 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED 2025-07-05 22:55:49.295666 | orchestrator | 2025-07-05 22:55:49 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:49.300918 | orchestrator | 2025-07-05 22:55:49 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:49.304034 | orchestrator | 2025-07-05 22:55:49 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:49.307643 | orchestrator | 2025-07-05 22:55:49 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:49.309561 | orchestrator | 2025-07-05 22:55:49 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:49.313419 | orchestrator | 2025-07-05 22:55:49 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:49.313498 | orchestrator | 2025-07-05 22:55:49 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:52.359082 | orchestrator | 2025-07-05 22:55:52 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED 2025-07-05 22:55:52.359300 | orchestrator | 2025-07-05 22:55:52 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:52.359553 | orchestrator | 2025-07-05 22:55:52 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state 
STARTED 2025-07-05 22:55:52.360029 | orchestrator | 2025-07-05 22:55:52 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:52.370771 | orchestrator | 2025-07-05 22:55:52 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:52.370843 | orchestrator | 2025-07-05 22:55:52 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state STARTED 2025-07-05 22:55:52.370853 | orchestrator | 2025-07-05 22:55:52 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:52.370862 | orchestrator | 2025-07-05 22:55:52 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:55.417443 | orchestrator | 2025-07-05 22:55:55 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED 2025-07-05 22:55:55.418533 | orchestrator | 2025-07-05 22:55:55 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 2025-07-05 22:55:55.419620 | orchestrator | 2025-07-05 22:55:55 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:55:55.420317 | orchestrator | 2025-07-05 22:55:55 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:55:55.420953 | orchestrator | 2025-07-05 22:55:55 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:55:55.421447 | orchestrator | 2025-07-05 22:55:55 | INFO  | Task 23222418-1c05-482a-ae74-e6a3e6c4f264 is in state SUCCESS 2025-07-05 22:55:55.422274 | orchestrator | 2025-07-05 22:55:55 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED 2025-07-05 22:55:55.422309 | orchestrator | 2025-07-05 22:55:55 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:55:58.462055 | orchestrator | 2025-07-05 22:55:58 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED 2025-07-05 22:55:58.466662 | orchestrator | 2025-07-05 22:55:58 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED 
2025-07-05 22:55:58.468150 | orchestrator | 2025-07-05 22:55:58 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:55:58.470598 | orchestrator | 2025-07-05 22:55:58 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:55:58.470890 | orchestrator | 2025-07-05 22:55:58 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:55:58.471519 | orchestrator | 2025-07-05 22:55:58 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED
2025-07-05 22:55:58.471965 | orchestrator | 2025-07-05 22:55:58 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:56:01.519594 | orchestrator | 2025-07-05 22:56:01 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED
2025-07-05 22:56:01.523417 | orchestrator | 2025-07-05 22:56:01 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED
2025-07-05 22:56:01.523433 | orchestrator | 2025-07-05 22:56:01 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:56:01.528519 | orchestrator | 2025-07-05 22:56:01 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:56:01.528531 | orchestrator | 2025-07-05 22:56:01 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:56:01.529394 | orchestrator | 2025-07-05 22:56:01 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state STARTED
2025-07-05 22:56:01.529426 | orchestrator | 2025-07-05 22:56:01 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:56:04.582439 | orchestrator | 2025-07-05 22:56:04 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED
2025-07-05 22:56:04.582550 | orchestrator | 2025-07-05 22:56:04 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED
2025-07-05 22:56:04.582567 | orchestrator | 2025-07-05 22:56:04 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:56:04.582579 | orchestrator | 2025-07-05 22:56:04 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:56:04.582591 | orchestrator | 2025-07-05 22:56:04 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:56:04.582602 | orchestrator | 2025-07-05 22:56:04 | INFO  | Task 029fdc4d-44b6-494c-ae58-3dc7355ab540 is in state SUCCESS
2025-07-05 22:56:04.582614 | orchestrator | 2025-07-05 22:56:04 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:56:07.643027 | orchestrator | 2025-07-05 22:56:07 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED
2025-07-05 22:56:07.643131 | orchestrator | 2025-07-05 22:56:07 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED
2025-07-05 22:56:07.643710 | orchestrator | 2025-07-05 22:56:07 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:56:07.646252 | orchestrator | 2025-07-05 22:56:07 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:56:07.651547 | orchestrator | 2025-07-05 22:56:07 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:56:07.651641 | orchestrator | 2025-07-05 22:56:07 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:56:10.702902 | orchestrator | 2025-07-05 22:56:10 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED
2025-07-05 22:56:10.703003 | orchestrator | 2025-07-05 22:56:10 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED
2025-07-05 22:56:10.704136 | orchestrator | 2025-07-05 22:56:10 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:56:10.704261 | orchestrator | 2025-07-05 22:56:10 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:56:10.706657 | orchestrator | 2025-07-05 22:56:10 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:56:10.706691 | orchestrator | 2025-07-05 22:56:10 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:56:13.759410 | orchestrator | 2025-07-05 22:56:13 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED
2025-07-05 22:56:13.764340 | orchestrator | 2025-07-05 22:56:13 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED
2025-07-05 22:56:13.767435 | orchestrator | 2025-07-05 22:56:13 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:56:13.769986 | orchestrator | 2025-07-05 22:56:13 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:56:13.771553 | orchestrator | 2025-07-05 22:56:13 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:56:13.771589 | orchestrator | 2025-07-05 22:56:13 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:56:16.831895 | orchestrator | 2025-07-05 22:56:16 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED
2025-07-05 22:56:16.832008 | orchestrator | 2025-07-05 22:56:16 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED
2025-07-05 22:56:16.837358 | orchestrator | 2025-07-05 22:56:16 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:56:16.841600 | orchestrator | 2025-07-05 22:56:16 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:56:16.841639 | orchestrator | 2025-07-05 22:56:16 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:56:16.841652 | orchestrator | 2025-07-05 22:56:16 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:56:19.910930 | orchestrator | 2025-07-05 22:56:19 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED
2025-07-05 22:56:19.911573 | orchestrator | 2025-07-05 22:56:19 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED
2025-07-05 22:56:19.914297 | orchestrator | 2025-07-05 22:56:19 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:56:19.917880 | orchestrator | 2025-07-05 22:56:19 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:56:19.921819 | orchestrator | 2025-07-05 22:56:19 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:56:19.921892 | orchestrator | 2025-07-05 22:56:19 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:56:22.998133 | orchestrator | 2025-07-05 22:56:22 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED
2025-07-05 22:56:22.999023 | orchestrator | 2025-07-05 22:56:22 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state STARTED
2025-07-05 22:56:23.000890 | orchestrator | 2025-07-05 22:56:22 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:56:23.002331 | orchestrator | 2025-07-05 22:56:23 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:56:23.003699 | orchestrator | 2025-07-05 22:56:23 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:56:23.003739 | orchestrator | 2025-07-05 22:56:23 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:56:26.047196 | orchestrator | 2025-07-05 22:56:26 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED
2025-07-05 22:56:26.047341 | orchestrator | 2025-07-05 22:56:26 | INFO  | Task eb187a1f-e14e-4c95-923d-50faeaf2bb6a is in state SUCCESS
2025-07-05 22:56:26.048574 | orchestrator |
2025-07-05 22:56:26.048620 | orchestrator |
2025-07-05 22:56:26.048633 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-07-05 22:56:26.048645 | orchestrator |
2025-07-05 22:56:26.048657 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-07-05 22:56:26.048690 | orchestrator | Saturday 05 July 2025 22:55:20 +0000 (0:00:00.914) 0:00:00.914 *********
2025-07-05 22:56:26.048702 | orchestrator | ok: [testbed-manager] => {
2025-07-05 22:56:26.048716 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-07-05 22:56:26.048729 | orchestrator | }
2025-07-05 22:56:26.048740 | orchestrator |
2025-07-05 22:56:26.048752 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-07-05 22:56:26.048763 | orchestrator | Saturday 05 July 2025 22:55:20 +0000 (0:00:00.470) 0:00:01.384 *********
2025-07-05 22:56:26.048774 | orchestrator | ok: [testbed-manager]
2025-07-05 22:56:26.048786 | orchestrator |
2025-07-05 22:56:26.048797 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-07-05 22:56:26.048809 | orchestrator | Saturday 05 July 2025 22:55:22 +0000 (0:00:01.460) 0:00:02.845 *********
2025-07-05 22:56:26.048820 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-07-05 22:56:26.048831 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-07-05 22:56:26.048842 | orchestrator |
2025-07-05 22:56:26.048853 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-07-05 22:56:26.048884 | orchestrator | Saturday 05 July 2025 22:55:23 +0000 (0:00:01.510) 0:00:04.356 *********
2025-07-05 22:56:26.048896 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.048932 | orchestrator |
2025-07-05 22:56:26.048944 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-07-05 22:56:26.048956 | orchestrator | Saturday 05 July 2025 22:55:25 +0000 (0:00:01.868) 0:00:06.224 *********
2025-07-05 22:56:26.048967 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.048978 | orchestrator |
2025-07-05 22:56:26.048990 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-07-05 22:56:26.049002 | orchestrator | Saturday 05 July 2025 22:55:26 +0000 (0:00:01.512) 0:00:07.736 *********
2025-07-05 22:56:26.049013 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-07-05 22:56:26.049025 | orchestrator | ok: [testbed-manager]
2025-07-05 22:56:26.049036 | orchestrator |
2025-07-05 22:56:26.049048 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-07-05 22:56:26.049061 | orchestrator | Saturday 05 July 2025 22:55:51 +0000 (0:00:24.478) 0:00:32.215 *********
2025-07-05 22:56:26.049072 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.049084 | orchestrator |
2025-07-05 22:56:26.049096 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 22:56:26.049108 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:56:26.049122 | orchestrator |
2025-07-05 22:56:26.049134 | orchestrator |
2025-07-05 22:56:26.049255 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 22:56:26.049270 | orchestrator | Saturday 05 July 2025 22:55:53 +0000 (0:00:01.820) 0:00:34.036 *********
2025-07-05 22:56:26.049283 | orchestrator | ===============================================================================
2025-07-05 22:56:26.049295 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.48s
2025-07-05 22:56:26.049309 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.87s
2025-07-05 22:56:26.049321 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.82s
2025-07-05 22:56:26.049334 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.51s
2025-07-05 22:56:26.049346 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.51s
2025-07-05 22:56:26.049359 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.46s
2025-07-05 22:56:26.049372 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.47s
2025-07-05 22:56:26.049385 | orchestrator |
2025-07-05 22:56:26.049398 | orchestrator |
2025-07-05 22:56:26.049410 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-07-05 22:56:26.049424 | orchestrator |
2025-07-05 22:56:26.049436 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-07-05 22:56:26.049457 | orchestrator | Saturday 05 July 2025 22:55:18 +0000 (0:00:00.366) 0:00:00.366 *********
2025-07-05 22:56:26.049471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-07-05 22:56:26.049485 | orchestrator |
2025-07-05 22:56:26.049498 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-07-05 22:56:26.049510 | orchestrator | Saturday 05 July 2025 22:55:19 +0000 (0:00:00.548) 0:00:00.914 *********
2025-07-05 22:56:26.049521 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-07-05 22:56:26.049532 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-07-05 22:56:26.049543 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-07-05 22:56:26.049555 | orchestrator |
2025-07-05 22:56:26.049567 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-07-05 22:56:26.049586 | orchestrator | Saturday 05 July 2025 22:55:21 +0000 (0:00:01.918) 0:00:02.833 *********
2025-07-05 22:56:26.049597 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.049609 | orchestrator |
2025-07-05 22:56:26.049620 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-07-05 22:56:26.049632 | orchestrator | Saturday 05 July 2025 22:55:23 +0000 (0:00:02.003) 0:00:04.837 *********
2025-07-05 22:56:26.049656 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-07-05 22:56:26.049668 | orchestrator | ok: [testbed-manager]
2025-07-05 22:56:26.049679 | orchestrator |
2025-07-05 22:56:26.049690 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-07-05 22:56:26.049701 | orchestrator | Saturday 05 July 2025 22:55:57 +0000 (0:00:34.732) 0:00:39.569 *********
2025-07-05 22:56:26.049712 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.049724 | orchestrator |
2025-07-05 22:56:26.049735 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-07-05 22:56:26.049746 | orchestrator | Saturday 05 July 2025 22:55:58 +0000 (0:00:00.741) 0:00:40.311 *********
2025-07-05 22:56:26.049757 | orchestrator | ok: [testbed-manager]
2025-07-05 22:56:26.049768 | orchestrator |
2025-07-05 22:56:26.049780 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-07-05 22:56:26.049791 | orchestrator | Saturday 05 July 2025 22:55:59 +0000 (0:00:00.635) 0:00:40.946 *********
2025-07-05 22:56:26.049802 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.049813 | orchestrator |
2025-07-05 22:56:26.049825 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-07-05 22:56:26.049836 | orchestrator | Saturday 05 July 2025 22:56:00 +0000 (0:00:01.721) 0:00:42.668 *********
2025-07-05 22:56:26.049847 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.049858 | orchestrator |
2025-07-05 22:56:26.049870 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-07-05 22:56:26.049881 | orchestrator | Saturday 05 July 2025 22:56:01 +0000 (0:00:00.804) 0:00:43.472 *********
2025-07-05 22:56:26.049892 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.049903 | orchestrator |
2025-07-05 22:56:26.049914 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-07-05 22:56:26.049926 | orchestrator | Saturday 05 July 2025 22:56:02 +0000 (0:00:00.744) 0:00:44.216 *********
2025-07-05 22:56:26.049937 | orchestrator | ok: [testbed-manager]
2025-07-05 22:56:26.049948 | orchestrator |
2025-07-05 22:56:26.049959 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 22:56:26.049971 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:56:26.049982 | orchestrator |
2025-07-05 22:56:26.049993 | orchestrator |
2025-07-05 22:56:26.050004 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 22:56:26.050101 | orchestrator | Saturday 05 July 2025 22:56:03 +0000 (0:00:00.487) 0:00:44.703 *********
2025-07-05 22:56:26.050127 | orchestrator | ===============================================================================
2025-07-05 22:56:26.050152 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.73s
2025-07-05 22:56:26.050177 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.00s
2025-07-05 22:56:26.050197 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.92s
2025-07-05 22:56:26.050215 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.72s
2025-07-05 22:56:26.050290 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.80s
2025-07-05 22:56:26.050310 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.74s
2025-07-05 22:56:26.050329 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.74s
2025-07-05 22:56:26.050346 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.64s
2025-07-05 22:56:26.050379 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.55s
2025-07-05 22:56:26.050391 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.49s
2025-07-05 22:56:26.050402 | orchestrator |
2025-07-05 22:56:26.050413 | orchestrator |
2025-07-05 22:56:26.050424 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-05 22:56:26.050436 | orchestrator |
2025-07-05 22:56:26.050447 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-05 22:56:26.050465 | orchestrator | Saturday 05 July 2025 22:55:19 +0000 (0:00:00.910) 0:00:00.910 *********
2025-07-05 22:56:26.050488 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-07-05 22:56:26.050512 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-07-05 22:56:26.050530 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-07-05 22:56:26.050547 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-07-05 22:56:26.050573 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-07-05 22:56:26.050594 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-07-05 22:56:26.050612 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-07-05 22:56:26.050632 | orchestrator |
2025-07-05 22:56:26.050645 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-07-05 22:56:26.050656 | orchestrator |
2025-07-05 22:56:26.050667 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-07-05 22:56:26.050679 | orchestrator | Saturday 05 July 2025 22:55:21 +0000 (0:00:01.953) 0:00:02.863 *********
2025-07-05 22:56:26.050703 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:56:26.050717 | orchestrator |
2025-07-05 22:56:26.050728 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-07-05 22:56:26.050739 | orchestrator | Saturday 05 July 2025 22:55:24 +0000 (0:00:02.489) 0:00:05.353 *********
2025-07-05 22:56:26.050750 | orchestrator | ok: [testbed-manager]
2025-07-05 22:56:26.050762 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:56:26.050773 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:56:26.050784 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:56:26.050795 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:56:26.050817 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:56:26.050829 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:56:26.050840 | orchestrator |
2025-07-05 22:56:26.050851 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-07-05 22:56:26.050862 | orchestrator | Saturday 05 July 2025 22:55:26 +0000 (0:00:02.312) 0:00:07.665 *********
2025-07-05 22:56:26.050873 | orchestrator | ok: [testbed-manager]
2025-07-05 22:56:26.050884 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:56:26.050895 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:56:26.050906 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:56:26.050917 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:56:26.050928 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:56:26.050938 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:56:26.050950 | orchestrator |
2025-07-05 22:56:26.050961 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-07-05 22:56:26.050972 | orchestrator | Saturday 05 July 2025 22:55:30 +0000 (0:00:04.046) 0:00:11.712 *********
2025-07-05 22:56:26.050983 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.050994 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:56:26.051005 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:56:26.051016 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:56:26.051027 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:56:26.051038 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:56:26.051049 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:56:26.051069 | orchestrator |
2025-07-05 22:56:26.051081 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-07-05 22:56:26.051092 | orchestrator | Saturday 05 July 2025 22:55:33 +0000 (0:00:02.963) 0:00:14.676 *********
2025-07-05 22:56:26.051103 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.051114 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:56:26.051125 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:56:26.051136 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:56:26.051147 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:56:26.051158 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:56:26.051169 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:56:26.051180 | orchestrator |
2025-07-05 22:56:26.051191 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-07-05 22:56:26.051202 | orchestrator | Saturday 05 July 2025 22:55:44 +0000 (0:00:10.627) 0:00:25.304 *********
2025-07-05 22:56:26.051213 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.051284 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:56:26.051296 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:56:26.051307 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:56:26.051318 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:56:26.051329 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:56:26.051340 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:56:26.051351 | orchestrator |
2025-07-05 22:56:26.051362 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-07-05 22:56:26.051372 | orchestrator | Saturday 05 July 2025 22:56:01 +0000 (0:00:17.083) 0:00:42.387 *********
2025-07-05 22:56:26.051383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:56:26.051395 | orchestrator |
2025-07-05 22:56:26.051405 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-07-05 22:56:26.051414 | orchestrator | Saturday 05 July 2025 22:56:03 +0000 (0:00:02.023) 0:00:44.410 *********
2025-07-05 22:56:26.051424 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-07-05 22:56:26.051434 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-07-05 22:56:26.051444 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-07-05 22:56:26.051454 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-07-05 22:56:26.051463 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-07-05 22:56:26.051473 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-07-05 22:56:26.051483 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-07-05 22:56:26.051493 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-07-05 22:56:26.051502 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-07-05 22:56:26.051512 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-07-05 22:56:26.051522 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-07-05 22:56:26.051532 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-07-05 22:56:26.051541 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-07-05 22:56:26.051556 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-07-05 22:56:26.051566 | orchestrator |
2025-07-05 22:56:26.051577 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-07-05 22:56:26.051587 | orchestrator | Saturday 05 July 2025 22:56:08 +0000 (0:00:04.947) 0:00:49.358 *********
2025-07-05 22:56:26.051596 | orchestrator | ok: [testbed-manager]
2025-07-05 22:56:26.051606 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:56:26.051616 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:56:26.051626 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:56:26.051635 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:56:26.051645 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:56:26.051655 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:56:26.051670 | orchestrator |
2025-07-05 22:56:26.051680 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-07-05 22:56:26.051690 | orchestrator | Saturday 05 July 2025 22:56:09 +0000 (0:00:01.520) 0:00:50.878 *********
2025-07-05 22:56:26.051700 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.051710 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:56:26.051720 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:56:26.051730 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:56:26.051740 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:56:26.051749 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:56:26.051759 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:56:26.051769 | orchestrator |
2025-07-05 22:56:26.051779 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-07-05 22:56:26.051796 | orchestrator | Saturday 05 July 2025 22:56:11 +0000 (0:00:01.858) 0:00:52.736 *********
2025-07-05 22:56:26.051806 | orchestrator | ok: [testbed-manager]
2025-07-05 22:56:26.051818 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:56:26.051833 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:56:26.051849 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:56:26.051865 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:56:26.051880 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:56:26.051896 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:56:26.051912 | orchestrator |
2025-07-05 22:56:26.051924 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-07-05 22:56:26.051934 | orchestrator | Saturday 05 July 2025 22:56:13 +0000 (0:00:02.005) 0:00:54.742 *********
2025-07-05 22:56:26.051944 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:56:26.051953 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:56:26.051963 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:56:26.051973 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:56:26.051982 | orchestrator | ok: [testbed-manager]
2025-07-05 22:56:26.051992 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:56:26.052001 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:56:26.052011 | orchestrator |
2025-07-05 22:56:26.052021 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-07-05 22:56:26.052031 | orchestrator | Saturday 05 July 2025 22:56:15 +0000 (0:00:02.250) 0:00:56.992 *********
2025-07-05 22:56:26.052041 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-07-05 22:56:26.052052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 22:56:26.052063 | orchestrator |
2025-07-05 22:56:26.052073 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-07-05 22:56:26.052082 | orchestrator | Saturday 05 July 2025 22:56:17 +0000 (0:00:01.722) 0:00:58.715 *********
2025-07-05 22:56:26.052092 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.052102 | orchestrator |
2025-07-05 22:56:26.052112 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-07-05 22:56:26.052122 | orchestrator | Saturday 05 July 2025 22:56:20 +0000 (0:00:03.208) 0:01:01.923 *********
2025-07-05 22:56:26.052131 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:56:26.052141 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:56:26.052151 | orchestrator | changed: [testbed-manager]
2025-07-05 22:56:26.052161 | orchestrator | changed: [testbed-node-3]
2025-07-05 22:56:26.052170 | orchestrator | changed: [testbed-node-4]
2025-07-05 22:56:26.052180 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:56:26.052190 | orchestrator | changed: [testbed-node-5]
2025-07-05 22:56:26.052199 | orchestrator |
2025-07-05 22:56:26.052209 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 22:56:26.052241 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:56:26.052271 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:56:26.052288 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:56:26.052305 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:56:26.052316 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:56:26.052326 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:56:26.052336 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:56:26.052346 | orchestrator |
2025-07-05 22:56:26.052355 | orchestrator |
2025-07-05 22:56:26.052365 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 22:56:26.052375 | orchestrator | Saturday 05 July 2025 22:56:24 +0000 (0:00:04.026) 0:01:05.950 *********
2025-07-05 22:56:26.052391 | orchestrator | ===============================================================================
2025-07-05 22:56:26.052400 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 17.08s
2025-07-05 22:56:26.052410 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.63s
2025-07-05 22:56:26.052420 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.95s
2025-07-05 22:56:26.052430 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.05s
2025-07-05 22:56:26.052439 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 4.03s
2025-07-05 22:56:26.052449 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.21s
2025-07-05 22:56:26.052459 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.96s
2025-07-05 22:56:26.052468 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.49s
2025-07-05 22:56:26.052478 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.31s
2025-07-05 22:56:26.052488 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.25s
2025-07-05 22:56:26.052497 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.02s
2025-07-05 22:56:26.052514 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.01s
2025-07-05 22:56:26.052525 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.95s
2025-07-05 22:56:26.052535 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.86s
2025-07-05 22:56:26.052544 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.72s
2025-07-05 22:56:26.052554 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.52s
2025-07-05 22:56:26.052564 | orchestrator | 2025-07-05 22:56:26 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:56:26.052574 | orchestrator | 2025-07-05 22:56:26 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:56:26.052584 | orchestrator | 2025-07-05 22:56:26 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:56:26.052594 | orchestrator | 2025-07-05 22:56:26 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:56:29.090700 | orchestrator | 2025-07-05 22:56:29 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED
2025-07-05 22:56:29.091877 | orchestrator | 2025-07-05 22:56:29 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:56:29.092964 | orchestrator | 2025-07-05 22:56:29 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:56:29.094500 | orchestrator | 2025-07-05 22:56:29 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:56:29.094586 | orchestrator | 2025-07-05 22:56:29 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:56:32.141884 | orchestrator | 2025-07-05 22:56:32 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED
2025-07-05 22:56:32.143387 | orchestrator | 2025-07-05 22:56:32 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:56:32.145017 | orchestrator | 2025-07-05 22:56:32 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:56:32.146888 | orchestrator | 2025-07-05 22:56:32 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:56:32.146923 | orchestrator | 2025-07-05 22:56:32 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:56:35.192122 | orchestrator | 2025-07-05 22:56:35 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED
2025-07-05 22:56:35.193393 | orchestrator | 2025-07-05 22:56:35 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:56:35.193800 | orchestrator | 2025-07-05 22:56:35 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:56:35.196683 | orchestrator | 2025-07-05 22:56:35 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:56:35.196720 | orchestrator | 2025-07-05 22:56:35 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:56:38.237175 | orchestrator | 2025-07-05 22:56:38 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED
2025-07-05 22:56:38.239640 | orchestrator | 2025-07-05 22:56:38 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED
2025-07-05 22:56:38.242739 | orchestrator | 2025-07-05 22:56:38 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:56:38.245319 | orchestrator
| 2025-07-05 22:56:38 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:56:38.245630 | orchestrator | 2025-07-05 22:56:38 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:56:41.290117 | orchestrator | 2025-07-05 22:56:41 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED 2025-07-05 22:56:41.290382 | orchestrator | 2025-07-05 22:56:41 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:56:41.291555 | orchestrator | 2025-07-05 22:56:41 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:56:41.292839 | orchestrator | 2025-07-05 22:56:41 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:56:41.292864 | orchestrator | 2025-07-05 22:56:41 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:56:44.346005 | orchestrator | 2025-07-05 22:56:44 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state STARTED 2025-07-05 22:56:44.347433 | orchestrator | 2025-07-05 22:56:44 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:56:44.349811 | orchestrator | 2025-07-05 22:56:44 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:56:44.352029 | orchestrator | 2025-07-05 22:56:44 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:56:44.353697 | orchestrator | 2025-07-05 22:56:44 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:56:47.401123 | orchestrator | 2025-07-05 22:56:47 | INFO  | Task f5860b1f-0e14-4d4f-98ec-1e05490809ca is in state SUCCESS 2025-07-05 22:56:47.403151 | orchestrator | 2025-07-05 22:56:47 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:56:47.404764 | orchestrator | 2025-07-05 22:56:47 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:56:47.406540 | orchestrator | 2025-07-05 22:56:47 | INFO  | 
Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:56:47.406565 | orchestrator | 2025-07-05 22:56:47 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:56:50.448019 | orchestrator | 2025-07-05 22:56:50 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:56:50.448446 | orchestrator | 2025-07-05 22:56:50 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:56:50.448912 | orchestrator | 2025-07-05 22:56:50 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:56:50.448941 | orchestrator | 2025-07-05 22:56:50 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:56:53.496888 | orchestrator | 2025-07-05 22:56:53 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:56:53.497477 | orchestrator | 2025-07-05 22:56:53 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:56:53.498535 | orchestrator | 2025-07-05 22:56:53 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:56:53.498579 | orchestrator | 2025-07-05 22:56:53 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:56:56.546693 | orchestrator | 2025-07-05 22:56:56 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:56:56.547627 | orchestrator | 2025-07-05 22:56:56 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:56:56.549508 | orchestrator | 2025-07-05 22:56:56 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:56:56.549531 | orchestrator | 2025-07-05 22:56:56 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:56:59.591356 | orchestrator | 2025-07-05 22:56:59 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:56:59.591971 | orchestrator | 2025-07-05 22:56:59 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state 
STARTED 2025-07-05 22:56:59.593815 | orchestrator | 2025-07-05 22:56:59 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:56:59.593847 | orchestrator | 2025-07-05 22:56:59 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:02.640641 | orchestrator | 2025-07-05 22:57:02 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:02.642435 | orchestrator | 2025-07-05 22:57:02 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:02.644217 | orchestrator | 2025-07-05 22:57:02 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:02.644243 | orchestrator | 2025-07-05 22:57:02 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:05.699114 | orchestrator | 2025-07-05 22:57:05 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:05.699663 | orchestrator | 2025-07-05 22:57:05 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:05.701085 | orchestrator | 2025-07-05 22:57:05 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:05.701169 | orchestrator | 2025-07-05 22:57:05 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:08.749202 | orchestrator | 2025-07-05 22:57:08 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:08.750622 | orchestrator | 2025-07-05 22:57:08 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:08.752600 | orchestrator | 2025-07-05 22:57:08 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:08.752931 | orchestrator | 2025-07-05 22:57:08 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:11.788994 | orchestrator | 2025-07-05 22:57:11 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:11.789448 | orchestrator | 
2025-07-05 22:57:11 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:11.790789 | orchestrator | 2025-07-05 22:57:11 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:11.790838 | orchestrator | 2025-07-05 22:57:11 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:14.840807 | orchestrator | 2025-07-05 22:57:14 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:14.841261 | orchestrator | 2025-07-05 22:57:14 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:14.842638 | orchestrator | 2025-07-05 22:57:14 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:14.842775 | orchestrator | 2025-07-05 22:57:14 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:17.890283 | orchestrator | 2025-07-05 22:57:17 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:17.890960 | orchestrator | 2025-07-05 22:57:17 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:17.892259 | orchestrator | 2025-07-05 22:57:17 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:17.892320 | orchestrator | 2025-07-05 22:57:17 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:20.928775 | orchestrator | 2025-07-05 22:57:20 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:20.931382 | orchestrator | 2025-07-05 22:57:20 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:20.931688 | orchestrator | 2025-07-05 22:57:20 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:20.931716 | orchestrator | 2025-07-05 22:57:20 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:23.971950 | orchestrator | 2025-07-05 22:57:23 | INFO  | Task 
d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:23.972688 | orchestrator | 2025-07-05 22:57:23 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:23.973719 | orchestrator | 2025-07-05 22:57:23 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:23.973749 | orchestrator | 2025-07-05 22:57:23 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:27.026511 | orchestrator | 2025-07-05 22:57:27 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:27.026626 | orchestrator | 2025-07-05 22:57:27 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:27.027815 | orchestrator | 2025-07-05 22:57:27 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:27.027864 | orchestrator | 2025-07-05 22:57:27 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:30.081103 | orchestrator | 2025-07-05 22:57:30 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:30.082800 | orchestrator | 2025-07-05 22:57:30 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:30.084859 | orchestrator | 2025-07-05 22:57:30 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:30.084906 | orchestrator | 2025-07-05 22:57:30 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:33.133296 | orchestrator | 2025-07-05 22:57:33 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:33.134211 | orchestrator | 2025-07-05 22:57:33 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:33.135864 | orchestrator | 2025-07-05 22:57:33 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:33.135990 | orchestrator | 2025-07-05 22:57:33 | INFO  | Wait 1 second(s) until the next 
check 2025-07-05 22:57:36.184097 | orchestrator | 2025-07-05 22:57:36 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:36.186067 | orchestrator | 2025-07-05 22:57:36 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:36.188087 | orchestrator | 2025-07-05 22:57:36 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:36.188129 | orchestrator | 2025-07-05 22:57:36 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:39.226413 | orchestrator | 2025-07-05 22:57:39 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:39.226614 | orchestrator | 2025-07-05 22:57:39 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:39.226636 | orchestrator | 2025-07-05 22:57:39 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:39.226660 | orchestrator | 2025-07-05 22:57:39 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:42.268912 | orchestrator | 2025-07-05 22:57:42 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:42.270643 | orchestrator | 2025-07-05 22:57:42 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:42.272771 | orchestrator | 2025-07-05 22:57:42 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:42.272823 | orchestrator | 2025-07-05 22:57:42 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:45.315478 | orchestrator | 2025-07-05 22:57:45 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:45.316919 | orchestrator | 2025-07-05 22:57:45 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:45.319110 | orchestrator | 2025-07-05 22:57:45 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 
22:57:45.319845 | orchestrator | 2025-07-05 22:57:45 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:48.361279 | orchestrator | 2025-07-05 22:57:48 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state STARTED 2025-07-05 22:57:48.362959 | orchestrator | 2025-07-05 22:57:48 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:48.366634 | orchestrator | 2025-07-05 22:57:48 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:48.366661 | orchestrator | 2025-07-05 22:57:48 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:51.409684 | orchestrator | 2025-07-05 22:57:51 | INFO  | Task d937c3da-0a44-408c-88b7-70d4fdecc961 is in state SUCCESS 2025-07-05 22:57:51.413194 | orchestrator | 2025-07-05 22:57:51.413277 | orchestrator | 2025-07-05 22:57:51.413291 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-07-05 22:57:51.413304 | orchestrator | 2025-07-05 22:57:51.413416 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-07-05 22:57:51.413430 | orchestrator | Saturday 05 July 2025 22:55:40 +0000 (0:00:00.278) 0:00:00.278 ********* 2025-07-05 22:57:51.413443 | orchestrator | ok: [testbed-manager] 2025-07-05 22:57:51.413455 | orchestrator | 2025-07-05 22:57:51.413468 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-07-05 22:57:51.413480 | orchestrator | Saturday 05 July 2025 22:55:41 +0000 (0:00:00.779) 0:00:01.058 ********* 2025-07-05 22:57:51.413492 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-07-05 22:57:51.413504 | orchestrator | 2025-07-05 22:57:51.413516 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-07-05 22:57:51.413527 | orchestrator | Saturday 05 July 2025 22:55:41 +0000 (0:00:00.584) 0:00:01.642 ********* 
2025-07-05 22:57:51.413539 | orchestrator | changed: [testbed-manager] 2025-07-05 22:57:51.413587 | orchestrator | 2025-07-05 22:57:51.413600 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-07-05 22:57:51.413612 | orchestrator | Saturday 05 July 2025 22:55:43 +0000 (0:00:01.216) 0:00:02.859 ********* 2025-07-05 22:57:51.413623 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-07-05 22:57:51.413635 | orchestrator | ok: [testbed-manager] 2025-07-05 22:57:51.413647 | orchestrator | 2025-07-05 22:57:51.413658 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-07-05 22:57:51.413670 | orchestrator | Saturday 05 July 2025 22:56:40 +0000 (0:00:57.610) 0:01:00.469 ********* 2025-07-05 22:57:51.413681 | orchestrator | changed: [testbed-manager] 2025-07-05 22:57:51.413695 | orchestrator | 2025-07-05 22:57:51.413708 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:57:51.413733 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 22:57:51.413748 | orchestrator | 2025-07-05 22:57:51.413761 | orchestrator | 2025-07-05 22:57:51.413775 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:57:51.413789 | orchestrator | Saturday 05 July 2025 22:56:44 +0000 (0:00:03.539) 0:01:04.009 ********* 2025-07-05 22:57:51.413801 | orchestrator | =============================================================================== 2025-07-05 22:57:51.413812 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 57.61s 2025-07-05 22:57:51.413824 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.54s 2025-07-05 22:57:51.413835 | orchestrator | osism.services.phpmyadmin : Copy 
docker-compose.yml file ---------------- 1.22s 2025-07-05 22:57:51.413846 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.78s 2025-07-05 22:57:51.413857 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.58s 2025-07-05 22:57:51.413869 | orchestrator | 2025-07-05 22:57:51.413880 | orchestrator | 2025-07-05 22:57:51.413892 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-07-05 22:57:51.413904 | orchestrator | 2025-07-05 22:57:51.413916 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-07-05 22:57:51.413928 | orchestrator | Saturday 05 July 2025 22:55:12 +0000 (0:00:00.233) 0:00:00.233 ********* 2025-07-05 22:57:51.413940 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:57:51.413953 | orchestrator | 2025-07-05 22:57:51.413965 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-07-05 22:57:51.413983 | orchestrator | Saturday 05 July 2025 22:55:13 +0000 (0:00:01.356) 0:00:01.590 ********* 2025-07-05 22:57:51.414001 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-05 22:57:51.414175 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-05 22:57:51.414203 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-05 22:57:51.414223 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-05 22:57:51.414240 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-05 22:57:51.414252 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 
2025-07-05 22:57:51.414263 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-05 22:57:51.414274 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-05 22:57:51.414285 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-05 22:57:51.414296 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-05 22:57:51.414307 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-05 22:57:51.414320 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-05 22:57:51.414397 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-05 22:57:51.414409 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-05 22:57:51.414420 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-05 22:57:51.414431 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-05 22:57:51.414464 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-05 22:57:51.414476 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-05 22:57:51.414487 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-05 22:57:51.414498 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-05 22:57:51.414509 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-05 22:57:51.414521 | orchestrator | 2025-07-05 22:57:51.414532 | orchestrator | TASK [common : 
include_tasks] ************************************************** 2025-07-05 22:57:51.414543 | orchestrator | Saturday 05 July 2025 22:55:18 +0000 (0:00:04.612) 0:00:06.202 ********* 2025-07-05 22:57:51.414554 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 22:57:51.414567 | orchestrator | 2025-07-05 22:57:51.414578 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-07-05 22:57:51.414589 | orchestrator | Saturday 05 July 2025 22:55:19 +0000 (0:00:01.415) 0:00:07.617 ********* 2025-07-05 22:57:51.414613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.414630 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.414651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.414663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.414675 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.414687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.414709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.414723 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.414739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-07-05 22:57:51.414757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.414769 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.414781 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.414794 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.414828 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.414841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.414857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.414869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.414887 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.414899 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.414910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.414921 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.414931 | orchestrator | 2025-07-05 22:57:51.414941 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-07-05 22:57:51.414951 | orchestrator | Saturday 05 July 2025 22:55:24 +0000 (0:00:05.167) 0:00:12.785 ********* 2025-07-05 22:57:51.414973 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.414985 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.414996 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415011 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:57:51.415022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.415033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-07-05 22:57:51.415054 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:57:51.415064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.415075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415107 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:57:51.415122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.415142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415163 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:57:51.415199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.415210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415230 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:57:51.415241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.415257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415285 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:57:51.415300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.415311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415355 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:57:51.415365 | orchestrator | 2025-07-05 22:57:51.415376 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-07-05 22:57:51.415386 | orchestrator | Saturday 05 July 2025 22:55:26 +0000 (0:00:01.327) 0:00:14.115 ********* 2025-07-05 22:57:51.415449 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.415462 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415479 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415497 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:57:51.415507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.415528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-07-05 22:57:51.415539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.415560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415571 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:57:51.415581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.415607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415634 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:57:51.415644 | orchestrator | 
skipping: [testbed-node-2] 2025-07-05 22:57:51.415658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.415669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415690 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:57:51.415700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.415711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415742 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:57:51.415752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-05 22:57:51.415763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.415787 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:57:51.415797 | orchestrator | 2025-07-05 22:57:51.415807 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-07-05 22:57:51.415817 | orchestrator | Saturday 05 July 2025 22:55:28 +0000 (0:00:02.643) 0:00:16.758 ********* 2025-07-05 22:57:51.415827 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:57:51.415837 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:57:51.415847 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:57:51.415857 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:57:51.415867 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:57:51.415877 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:57:51.415886 | orchestrator | 
skipping: [testbed-node-5] 2025-07-05 22:57:51.415896 | orchestrator | 2025-07-05 22:57:51.415906 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-07-05 22:57:51.415916 | orchestrator | Saturday 05 July 2025 22:55:29 +0000 (0:00:01.207) 0:00:17.965 ********* 2025-07-05 22:57:51.415926 | orchestrator | skipping: [testbed-manager] 2025-07-05 22:57:51.415935 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:57:51.415945 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:57:51.415955 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:57:51.415965 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:57:51.415975 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:57:51.415985 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:57:51.415995 | orchestrator | 2025-07-05 22:57:51.416004 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-07-05 22:57:51.416032 | orchestrator | Saturday 05 July 2025 22:55:31 +0000 (0:00:01.135) 0:00:19.100 ********* 2025-07-05 22:57:51.416042 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.416059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.416076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.416087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.416102 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.416113 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.416123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.416134 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.416150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.416160 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.416177 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.416188 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-07-05 22:57:51.416202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.416213 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.416224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.416240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.416250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.416272 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.416283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.416293 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.416304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.416314 | orchestrator | 2025-07-05 22:57:51.416344 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-07-05 22:57:51.416363 | orchestrator | Saturday 05 July 2025 22:55:36 +0000 (0:00:05.581) 0:00:24.682 ********* 2025-07-05 22:57:51.416381 | orchestrator | [WARNING]: Skipped 2025-07-05 22:57:51.416398 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-07-05 22:57:51.416412 | orchestrator | to this access issue: 2025-07-05 22:57:51.416423 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-07-05 22:57:51.416433 | orchestrator | directory 2025-07-05 22:57:51.416443 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-05 22:57:51.416453 | orchestrator | 2025-07-05 22:57:51.416463 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-07-05 22:57:51.416473 | orchestrator | Saturday 05 July 2025 22:55:38 +0000 (0:00:01.454) 0:00:26.137 ********* 2025-07-05 22:57:51.416483 | orchestrator | [WARNING]: Skipped 2025-07-05 22:57:51.416492 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-07-05 22:57:51.416509 | orchestrator 
| to this access issue: 2025-07-05 22:57:51.416519 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-07-05 22:57:51.416529 | orchestrator | directory 2025-07-05 22:57:51.416539 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-05 22:57:51.416549 | orchestrator | 2025-07-05 22:57:51.416564 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-07-05 22:57:51.416581 | orchestrator | Saturday 05 July 2025 22:55:39 +0000 (0:00:00.957) 0:00:27.095 ********* 2025-07-05 22:57:51.416591 | orchestrator | [WARNING]: Skipped 2025-07-05 22:57:51.416601 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-07-05 22:57:51.416610 | orchestrator | to this access issue: 2025-07-05 22:57:51.416621 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-07-05 22:57:51.416630 | orchestrator | directory 2025-07-05 22:57:51.416640 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-05 22:57:51.416650 | orchestrator | 2025-07-05 22:57:51.416659 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-07-05 22:57:51.416669 | orchestrator | Saturday 05 July 2025 22:55:39 +0000 (0:00:00.795) 0:00:27.890 ********* 2025-07-05 22:57:51.416679 | orchestrator | [WARNING]: Skipped 2025-07-05 22:57:51.416689 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-07-05 22:57:51.416699 | orchestrator | to this access issue: 2025-07-05 22:57:51.416709 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-07-05 22:57:51.416719 | orchestrator | directory 2025-07-05 22:57:51.416729 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-05 22:57:51.416738 | orchestrator | 2025-07-05 22:57:51.416748 | orchestrator | TASK [common : Copying over fluentd.conf] 
************************************** 2025-07-05 22:57:51.416763 | orchestrator | Saturday 05 July 2025 22:55:40 +0000 (0:00:00.767) 0:00:28.657 ********* 2025-07-05 22:57:51.416773 | orchestrator | changed: [testbed-manager] 2025-07-05 22:57:51.416783 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:57:51.416793 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:57:51.416803 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:57:51.416813 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:57:51.416823 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:57:51.416832 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:57:51.416842 | orchestrator | 2025-07-05 22:57:51.416852 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-07-05 22:57:51.416862 | orchestrator | Saturday 05 July 2025 22:55:44 +0000 (0:00:03.802) 0:00:32.460 ********* 2025-07-05 22:57:51.416872 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-05 22:57:51.416882 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-05 22:57:51.416892 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-05 22:57:51.416917 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-05 22:57:51.416933 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-05 22:57:51.416948 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-05 22:57:51.416963 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-05 22:57:51.416978 | orchestrator | 2025-07-05 22:57:51.416993 
| orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-07-05 22:57:51.417007 | orchestrator | Saturday 05 July 2025 22:55:47 +0000 (0:00:03.466) 0:00:35.927 ********* 2025-07-05 22:57:51.417021 | orchestrator | changed: [testbed-manager] 2025-07-05 22:57:51.417045 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:57:51.417059 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:57:51.417074 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:57:51.417088 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:57:51.417103 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:57:51.417118 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:57:51.417134 | orchestrator | 2025-07-05 22:57:51.417150 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-07-05 22:57:51.417166 | orchestrator | Saturday 05 July 2025 22:55:51 +0000 (0:00:03.428) 0:00:39.355 ********* 2025-07-05 22:57:51.417190 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.417208 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.417223 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.417234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.417245 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.417270 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.417281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.417298 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.417317 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.417355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.417367 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.417377 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.417388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.417404 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.417421 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.417431 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.417446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.417457 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.417467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 22:57:51.417478 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.417488 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.417499 | orchestrator | 2025-07-05 22:57:51.417509 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-07-05 22:57:51.417519 | orchestrator | Saturday 05 July 2025 22:55:53 +0000 (0:00:02.454) 0:00:41.810 ********* 2025-07-05 22:57:51.417535 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-05 22:57:51.417545 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-05 22:57:51.417556 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-05 22:57:51.417574 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-05 22:57:51.417584 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-05 22:57:51.417594 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-05 22:57:51.417604 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-05 22:57:51.417614 | orchestrator | 2025-07-05 22:57:51.417624 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-07-05 22:57:51.417634 | orchestrator | Saturday 05 July 2025 22:55:56 
+0000 (0:00:02.298) 0:00:44.108 ********* 2025-07-05 22:57:51.417644 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-05 22:57:51.417654 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-05 22:57:51.417664 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-05 22:57:51.417674 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-05 22:57:51.417684 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-05 22:57:51.417693 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-05 22:57:51.417703 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-05 22:57:51.417713 | orchestrator | 2025-07-05 22:57:51.417723 | orchestrator | TASK [common : Check common containers] **************************************** 2025-07-05 22:57:51.417737 | orchestrator | Saturday 05 July 2025 22:55:58 +0000 (0:00:02.812) 0:00:46.920 ********* 2025-07-05 22:57:51.417747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.417757 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.417768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.417778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.417794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.417954 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.417970 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.417986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.417996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.418007 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-05 22:57:51.418053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.418074 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.418092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.418103 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.418114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.418129 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.418139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.418150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.418165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.418175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.418186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 22:57:51.418196 | orchestrator | 2025-07-05 22:57:51.418211 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-07-05 22:57:51.418221 | orchestrator | Saturday 05 July 2025 22:56:02 +0000 (0:00:03.230) 0:00:50.151 ********* 2025-07-05 22:57:51.418231 | orchestrator | changed: [testbed-manager] 2025-07-05 22:57:51.418241 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:57:51.418251 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:57:51.418261 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:57:51.418271 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:57:51.418280 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:57:51.418290 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:57:51.418300 | orchestrator | 2025-07-05 22:57:51.418310 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-07-05 22:57:51.418320 | orchestrator | Saturday 05 July 2025 22:56:04 +0000 (0:00:02.248) 0:00:52.400 ********* 2025-07-05 22:57:51.418399 | orchestrator | changed: [testbed-manager] 2025-07-05 22:57:51.418410 | orchestrator | changed: 
[testbed-node-0] 2025-07-05 22:57:51.418420 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:57:51.418428 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:57:51.418436 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:57:51.418444 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:57:51.418451 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:57:51.418459 | orchestrator | 2025-07-05 22:57:51.418468 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-05 22:57:51.418476 | orchestrator | Saturday 05 July 2025 22:56:06 +0000 (0:00:01.654) 0:00:54.054 ********* 2025-07-05 22:57:51.418484 | orchestrator | 2025-07-05 22:57:51.418492 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-05 22:57:51.418500 | orchestrator | Saturday 05 July 2025 22:56:06 +0000 (0:00:00.274) 0:00:54.328 ********* 2025-07-05 22:57:51.418508 | orchestrator | 2025-07-05 22:57:51.418516 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-05 22:57:51.418524 | orchestrator | Saturday 05 July 2025 22:56:06 +0000 (0:00:00.064) 0:00:54.393 ********* 2025-07-05 22:57:51.418532 | orchestrator | 2025-07-05 22:57:51.418544 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-05 22:57:51.418552 | orchestrator | Saturday 05 July 2025 22:56:06 +0000 (0:00:00.088) 0:00:54.481 ********* 2025-07-05 22:57:51.418560 | orchestrator | 2025-07-05 22:57:51.418568 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-05 22:57:51.418576 | orchestrator | Saturday 05 July 2025 22:56:06 +0000 (0:00:00.071) 0:00:54.553 ********* 2025-07-05 22:57:51.418592 | orchestrator | 2025-07-05 22:57:51.418601 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-05 22:57:51.418610 | 
orchestrator | Saturday 05 July 2025 22:56:06 +0000 (0:00:00.123) 0:00:54.676 ********* 2025-07-05 22:57:51.418619 | orchestrator | 2025-07-05 22:57:51.418628 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-05 22:57:51.418637 | orchestrator | Saturday 05 July 2025 22:56:06 +0000 (0:00:00.097) 0:00:54.773 ********* 2025-07-05 22:57:51.418646 | orchestrator | 2025-07-05 22:57:51.418656 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-07-05 22:57:51.418665 | orchestrator | Saturday 05 July 2025 22:56:06 +0000 (0:00:00.101) 0:00:54.875 ********* 2025-07-05 22:57:51.418674 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:57:51.418683 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:57:51.418692 | orchestrator | changed: [testbed-manager] 2025-07-05 22:57:51.418701 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:57:51.418710 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:57:51.418718 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:57:51.418727 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:57:51.418736 | orchestrator | 2025-07-05 22:57:51.418745 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-07-05 22:57:51.418754 | orchestrator | Saturday 05 July 2025 22:56:52 +0000 (0:00:45.826) 0:01:40.701 ********* 2025-07-05 22:57:51.418763 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:57:51.418772 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:57:51.418781 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:57:51.418790 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:57:51.418799 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:57:51.418808 | orchestrator | changed: [testbed-manager] 2025-07-05 22:57:51.418817 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:57:51.418826 | orchestrator | 2025-07-05 22:57:51.418967 
| orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-07-05 22:57:51.418978 | orchestrator | Saturday 05 July 2025 22:57:36 +0000 (0:00:44.236) 0:02:24.938 ********* 2025-07-05 22:57:51.418987 | orchestrator | ok: [testbed-manager] 2025-07-05 22:57:51.418995 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:57:51.419003 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:57:51.419011 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:57:51.419019 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:57:51.419027 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:57:51.419035 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:57:51.419043 | orchestrator | 2025-07-05 22:57:51.419051 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-07-05 22:57:51.419059 | orchestrator | Saturday 05 July 2025 22:57:38 +0000 (0:00:02.066) 0:02:27.005 ********* 2025-07-05 22:57:51.419067 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:57:51.419075 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:57:51.419083 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:57:51.419091 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:57:51.419099 | orchestrator | changed: [testbed-manager] 2025-07-05 22:57:51.419107 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:57:51.419115 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:57:51.419123 | orchestrator | 2025-07-05 22:57:51.419131 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:57:51.419141 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-05 22:57:51.419150 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-05 22:57:51.419164 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  
rescued=0 ignored=0 2025-07-05 22:57:51.419172 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-05 22:57:51.419188 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-05 22:57:51.419196 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-05 22:57:51.419204 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-05 22:57:51.419212 | orchestrator | 2025-07-05 22:57:51.419220 | orchestrator | 2025-07-05 22:57:51.419229 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:57:51.419237 | orchestrator | Saturday 05 July 2025 22:57:48 +0000 (0:00:09.210) 0:02:36.215 ********* 2025-07-05 22:57:51.419245 | orchestrator | =============================================================================== 2025-07-05 22:57:51.419253 | orchestrator | common : Restart fluentd container ------------------------------------- 45.83s 2025-07-05 22:57:51.419261 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 44.24s 2025-07-05 22:57:51.419270 | orchestrator | common : Restart cron container ----------------------------------------- 9.21s 2025-07-05 22:57:51.419278 | orchestrator | common : Copying over config.json files for services -------------------- 5.58s 2025-07-05 22:57:51.419290 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.17s 2025-07-05 22:57:51.419298 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.61s 2025-07-05 22:57:51.419306 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.80s 2025-07-05 22:57:51.419314 | orchestrator | common : Copying over cron logrotate config file ------------------------ 
3.47s 2025-07-05 22:57:51.419341 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.43s 2025-07-05 22:57:51.419352 | orchestrator | common : Check common containers ---------------------------------------- 3.23s 2025-07-05 22:57:51.419360 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.81s 2025-07-05 22:57:51.419368 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.64s 2025-07-05 22:57:51.419376 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.45s 2025-07-05 22:57:51.419384 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.30s 2025-07-05 22:57:51.419392 | orchestrator | common : Creating log volume -------------------------------------------- 2.25s 2025-07-05 22:57:51.419400 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.07s 2025-07-05 22:57:51.419408 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.65s 2025-07-05 22:57:51.419415 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.45s 2025-07-05 22:57:51.419423 | orchestrator | common : include_tasks -------------------------------------------------- 1.42s 2025-07-05 22:57:51.419431 | orchestrator | common : include_tasks -------------------------------------------------- 1.36s 2025-07-05 22:57:51.419439 | orchestrator | 2025-07-05 22:57:51 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED 2025-07-05 22:57:51.419448 | orchestrator | 2025-07-05 22:57:51 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:51.419456 | orchestrator | 2025-07-05 22:57:51 | INFO  | Task 95dd883b-1c19-4f5f-b3b0-51fc9309ccfd is in state STARTED 2025-07-05 22:57:51.419956 | orchestrator | 2025-07-05 22:57:51 | INFO  | Task 
657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:51.422316 | orchestrator | 2025-07-05 22:57:51 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:57:51.422835 | orchestrator | 2025-07-05 22:57:51 | INFO  | Task 07e59369-1fc6-45c1-88f1-3361a089b23b is in state STARTED 2025-07-05 22:57:51.423017 | orchestrator | 2025-07-05 22:57:51 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:54.461004 | orchestrator | 2025-07-05 22:57:54 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED 2025-07-05 22:57:54.461761 | orchestrator | 2025-07-05 22:57:54 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:54.462778 | orchestrator | 2025-07-05 22:57:54 | INFO  | Task 95dd883b-1c19-4f5f-b3b0-51fc9309ccfd is in state STARTED 2025-07-05 22:57:54.463388 | orchestrator | 2025-07-05 22:57:54 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:54.464052 | orchestrator | 2025-07-05 22:57:54 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:57:54.464985 | orchestrator | 2025-07-05 22:57:54 | INFO  | Task 07e59369-1fc6-45c1-88f1-3361a089b23b is in state STARTED 2025-07-05 22:57:54.465034 | orchestrator | 2025-07-05 22:57:54 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:57:57.494661 | orchestrator | 2025-07-05 22:57:57 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED 2025-07-05 22:57:57.494780 | orchestrator | 2025-07-05 22:57:57 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:57:57.495431 | orchestrator | 2025-07-05 22:57:57 | INFO  | Task 95dd883b-1c19-4f5f-b3b0-51fc9309ccfd is in state STARTED 2025-07-05 22:57:57.495970 | orchestrator | 2025-07-05 22:57:57 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:57:57.499564 | orchestrator | 2025-07-05 22:57:57 | INFO  | Task 
5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:57:57.499936 | orchestrator | 2025-07-05 22:57:57 | INFO  | Task 07e59369-1fc6-45c1-88f1-3361a089b23b is in state STARTED 2025-07-05 22:57:57.500114 | orchestrator | 2025-07-05 22:57:57 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:58:00.526792 | orchestrator | 2025-07-05 22:58:00 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED 2025-07-05 22:58:00.526920 | orchestrator | 2025-07-05 22:58:00 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:58:00.527253 | orchestrator | 2025-07-05 22:58:00 | INFO  | Task 95dd883b-1c19-4f5f-b3b0-51fc9309ccfd is in state STARTED 2025-07-05 22:58:00.527774 | orchestrator | 2025-07-05 22:58:00 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:58:00.528815 | orchestrator | 2025-07-05 22:58:00 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:58:00.529494 | orchestrator | 2025-07-05 22:58:00 | INFO  | Task 07e59369-1fc6-45c1-88f1-3361a089b23b is in state STARTED 2025-07-05 22:58:00.529518 | orchestrator | 2025-07-05 22:58:00 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:58:03.550514 | orchestrator | 2025-07-05 22:58:03 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED 2025-07-05 22:58:03.550625 | orchestrator | 2025-07-05 22:58:03 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:58:03.552751 | orchestrator | 2025-07-05 22:58:03 | INFO  | Task 95dd883b-1c19-4f5f-b3b0-51fc9309ccfd is in state STARTED 2025-07-05 22:58:03.554895 | orchestrator | 2025-07-05 22:58:03 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:58:03.556584 | orchestrator | 2025-07-05 22:58:03 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:58:03.558386 | orchestrator | 2025-07-05 22:58:03 | INFO  | Task 
07e59369-1fc6-45c1-88f1-3361a089b23b is in state STARTED 2025-07-05 22:58:03.558783 | orchestrator | 2025-07-05 22:58:03 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:58:06.595788 | orchestrator | 2025-07-05 22:58:06 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED 2025-07-05 22:58:06.596006 | orchestrator | 2025-07-05 22:58:06 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:58:06.596915 | orchestrator | 2025-07-05 22:58:06 | INFO  | Task 95dd883b-1c19-4f5f-b3b0-51fc9309ccfd is in state STARTED 2025-07-05 22:58:06.602586 | orchestrator | 2025-07-05 22:58:06 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:58:06.602626 | orchestrator | 2025-07-05 22:58:06 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:58:06.603114 | orchestrator | 2025-07-05 22:58:06 | INFO  | Task 07e59369-1fc6-45c1-88f1-3361a089b23b is in state STARTED 2025-07-05 22:58:06.603136 | orchestrator | 2025-07-05 22:58:06 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:58:09.631641 | orchestrator | 2025-07-05 22:58:09 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED 2025-07-05 22:58:09.631870 | orchestrator | 2025-07-05 22:58:09 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:58:09.632634 | orchestrator | 2025-07-05 22:58:09 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:58:09.632887 | orchestrator | 2025-07-05 22:58:09 | INFO  | Task 95dd883b-1c19-4f5f-b3b0-51fc9309ccfd is in state SUCCESS 2025-07-05 22:58:09.634206 | orchestrator | 2025-07-05 22:58:09 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:58:09.634869 | orchestrator | 2025-07-05 22:58:09 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:58:09.636318 | orchestrator | 2025-07-05 22:58:09 | INFO  | Task 
07e59369-1fc6-45c1-88f1-3361a089b23b is in state STARTED 2025-07-05 22:58:09.636386 | orchestrator | 2025-07-05 22:58:09 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:58:12.679867 | orchestrator | 2025-07-05 22:58:12 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED 2025-07-05 22:58:12.680851 | orchestrator | 2025-07-05 22:58:12 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:58:12.681034 | orchestrator | 2025-07-05 22:58:12 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:58:12.683658 | orchestrator | 2025-07-05 22:58:12 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:58:12.684329 | orchestrator | 2025-07-05 22:58:12 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:58:12.685255 | orchestrator | 2025-07-05 22:58:12 | INFO  | Task 07e59369-1fc6-45c1-88f1-3361a089b23b is in state STARTED 2025-07-05 22:58:12.685287 | orchestrator | 2025-07-05 22:58:12 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:58:15.718961 | orchestrator | 2025-07-05 22:58:15 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED 2025-07-05 22:58:15.719278 | orchestrator | 2025-07-05 22:58:15 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:58:15.720569 | orchestrator | 2025-07-05 22:58:15 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:58:15.721520 | orchestrator | 2025-07-05 22:58:15 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:58:15.723507 | orchestrator | 2025-07-05 22:58:15 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:58:15.725493 | orchestrator | 2025-07-05 22:58:15 | INFO  | Task 07e59369-1fc6-45c1-88f1-3361a089b23b is in state STARTED 2025-07-05 22:58:15.725523 | orchestrator | 2025-07-05 22:58:15 | INFO  | Wait 1 
second(s) until the next check 2025-07-05 22:58:18.766989 | orchestrator | 2025-07-05 22:58:18 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED 2025-07-05 22:58:18.769158 | orchestrator | 2025-07-05 22:58:18 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:58:18.771331 | orchestrator | 2025-07-05 22:58:18 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:58:18.772909 | orchestrator | 2025-07-05 22:58:18 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:58:18.774176 | orchestrator | 2025-07-05 22:58:18 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:58:18.776109 | orchestrator | 2025-07-05 22:58:18 | INFO  | Task 07e59369-1fc6-45c1-88f1-3361a089b23b is in state STARTED 2025-07-05 22:58:18.776137 | orchestrator | 2025-07-05 22:58:18 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:58:21.803634 | orchestrator | 2025-07-05 22:58:21 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED 2025-07-05 22:58:21.804491 | orchestrator | 2025-07-05 22:58:21 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:58:21.805809 | orchestrator | 2025-07-05 22:58:21 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:58:21.806909 | orchestrator | 2025-07-05 22:58:21 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:58:21.807923 | orchestrator | 2025-07-05 22:58:21 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:58:21.809296 | orchestrator | 2025-07-05 22:58:21 | INFO  | Task 07e59369-1fc6-45c1-88f1-3361a089b23b is in state STARTED 2025-07-05 22:58:21.809338 | orchestrator | 2025-07-05 22:58:21 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:58:24.835804 | orchestrator | 2025-07-05 22:58:24 | INFO  | Task 
cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED 2025-07-05 22:58:24.836062 | orchestrator | 2025-07-05 22:58:24 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:58:24.840747 | orchestrator | 2025-07-05 22:58:24 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:58:24.841333 | orchestrator | 2025-07-05 22:58:24 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:58:24.842280 | orchestrator | 2025-07-05 22:58:24 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:58:24.843329 | orchestrator | 2025-07-05 22:58:24 | INFO  | Task 07e59369-1fc6-45c1-88f1-3361a089b23b is in state SUCCESS 2025-07-05 22:58:24.844736 | orchestrator | 2025-07-05 22:58:24.844794 | orchestrator | 2025-07-05 22:58:24.844816 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 22:58:24.844830 | orchestrator | 2025-07-05 22:58:24.844842 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 22:58:24.844854 | orchestrator | Saturday 05 July 2025 22:57:53 +0000 (0:00:00.405) 0:00:00.405 ********* 2025-07-05 22:58:24.844865 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:58:24.844877 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:58:24.844889 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:58:24.844901 | orchestrator | 2025-07-05 22:58:24.844913 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 22:58:24.844924 | orchestrator | Saturday 05 July 2025 22:57:54 +0000 (0:00:00.339) 0:00:00.745 ********* 2025-07-05 22:58:24.844964 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-07-05 22:58:24.844976 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-07-05 22:58:24.844987 | orchestrator | ok: [testbed-node-2] => 
(item=enable_memcached_True) 2025-07-05 22:58:24.844998 | orchestrator | 2025-07-05 22:58:24.845009 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-07-05 22:58:24.845020 | orchestrator | 2025-07-05 22:58:24.845031 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-07-05 22:58:24.845042 | orchestrator | Saturday 05 July 2025 22:57:54 +0000 (0:00:00.594) 0:00:01.339 ********* 2025-07-05 22:58:24.845053 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 22:58:24.845066 | orchestrator | 2025-07-05 22:58:24.845077 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-07-05 22:58:24.845088 | orchestrator | Saturday 05 July 2025 22:57:55 +0000 (0:00:00.821) 0:00:02.161 ********* 2025-07-05 22:58:24.845114 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-07-05 22:58:24.845126 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-07-05 22:58:24.845137 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-07-05 22:58:24.845148 | orchestrator | 2025-07-05 22:58:24.845159 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-07-05 22:58:24.845170 | orchestrator | Saturday 05 July 2025 22:57:56 +0000 (0:00:00.862) 0:00:03.023 ********* 2025-07-05 22:58:24.845181 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-07-05 22:58:24.845192 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-07-05 22:58:24.845203 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-07-05 22:58:24.845214 | orchestrator | 2025-07-05 22:58:24.845225 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-07-05 22:58:24.845237 | orchestrator | Saturday 05 July 2025 22:57:58 +0000 
(0:00:02.107) 0:00:05.131 *********
2025-07-05 22:58:24.845248 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:58:24.845259 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:58:24.845270 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:58:24.845281 | orchestrator |
2025-07-05 22:58:24.845292 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-07-05 22:58:24.845305 | orchestrator | Saturday 05 July 2025 22:58:00 +0000 (0:00:02.009) 0:00:07.140 *********
2025-07-05 22:58:24.845318 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:58:24.845330 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:58:24.845342 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:58:24.845354 | orchestrator |
2025-07-05 22:58:24.845398 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 22:58:24.845411 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:58:24.845425 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:58:24.845437 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:58:24.845450 | orchestrator |
2025-07-05 22:58:24.845462 | orchestrator |
2025-07-05 22:58:24.845475 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 22:58:24.845487 | orchestrator | Saturday 05 July 2025 22:58:07 +0000 (0:00:06.747) 0:00:13.888 *********
2025-07-05 22:58:24.845499 | orchestrator | ===============================================================================
2025-07-05 22:58:24.845512 | orchestrator | memcached : Restart memcached container --------------------------------- 6.75s
2025-07-05 22:58:24.845524 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.11s
2025-07-05 22:58:24.845536 | orchestrator | memcached : Check memcached container ----------------------------------- 2.01s
2025-07-05 22:58:24.845557 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.86s
2025-07-05 22:58:24.845569 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.82s
2025-07-05 22:58:24.845581 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s
2025-07-05 22:58:24.845594 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2025-07-05 22:58:24.845606 | orchestrator |
2025-07-05 22:58:24.845618 | orchestrator |
2025-07-05 22:58:24.845630 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-05 22:58:24.845642 | orchestrator |
2025-07-05 22:58:24.845654 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-05 22:58:24.845665 | orchestrator | Saturday 05 July 2025 22:57:54 +0000 (0:00:00.581) 0:00:00.581 *********
2025-07-05 22:58:24.845676 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:58:24.845687 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:58:24.845698 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:58:24.845709 | orchestrator |
2025-07-05 22:58:24.845721 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-05 22:58:24.845746 | orchestrator | Saturday 05 July 2025 22:57:55 +0000 (0:00:00.485) 0:00:01.066 *********
2025-07-05 22:58:24.845758 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-07-05 22:58:24.845769 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-07-05 22:58:24.845780 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-07-05 22:58:24.845791 | orchestrator |
2025-07-05 22:58:24.845802 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-07-05 22:58:24.845813 | orchestrator |
2025-07-05 22:58:24.845829 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-07-05 22:58:24.845847 | orchestrator | Saturday 05 July 2025 22:57:56 +0000 (0:00:00.877) 0:00:01.944 *********
2025-07-05 22:58:24.845859 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 22:58:24.845870 | orchestrator |
2025-07-05 22:58:24.845881 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-07-05 22:58:24.845892 | orchestrator | Saturday 05 July 2025 22:57:56 +0000 (0:00:00.868) 0:00:02.812 *********
2025-07-05 22:58:24.845912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-05 22:58:24.845930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-05 22:58:24.845942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-05 22:58:24.845962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-05 22:58:24.845975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-05 22:58:24.845995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846007 | orchestrator |
2025-07-05 22:58:24.846068 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-07-05 22:58:24.846084 | orchestrator | Saturday 05 July 2025 22:57:58 +0000 (0:00:01.404) 0:00:04.217 *********
2025-07-05 22:58:24.846096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846190 | orchestrator |
2025-07-05 22:58:24.846201 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-07-05 22:58:24.846212 | orchestrator | Saturday 05 July 2025 22:58:01 +0000 (0:00:02.709) 0:00:06.926 *********
2025-07-05 22:58:24.846224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846307 | orchestrator |
2025-07-05 22:58:24.846324 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-07-05 22:58:24.846336 | orchestrator | Saturday 05 July 2025 22:58:03 +0000 (0:00:02.199) 0:00:09.125 *********
2025-07-05 22:58:24.846347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-05 22:58:24.846447 | orchestrator |
2025-07-05 22:58:24.846458 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-07-05 22:58:24.846469 | orchestrator | Saturday 05 July 2025 22:58:04 +0000 (0:00:01.324) 0:00:10.449 *********
2025-07-05 22:58:24.846481 | orchestrator |
2025-07-05 22:58:24.846492 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-07-05 22:58:24.846508 | orchestrator | Saturday 05 July 2025 22:58:04 +0000 (0:00:00.067) 0:00:10.517 *********
2025-07-05 22:58:24.846520 | orchestrator |
2025-07-05 22:58:24.846531 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-07-05 22:58:24.846542 | orchestrator | Saturday 05 July 2025 22:58:04 +0000 (0:00:00.071) 0:00:10.588 *********
2025-07-05 22:58:24.846553 | orchestrator |
2025-07-05 22:58:24.846564 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-07-05 22:58:24.846575 | orchestrator | Saturday 05 July 2025 22:58:04 +0000 (0:00:00.063) 0:00:10.652 *********
2025-07-05 22:58:24.846587 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:58:24.846598 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:58:24.846609 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:58:24.846620 | orchestrator |
2025-07-05 22:58:24.846631 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-07-05 22:58:24.846643 | orchestrator | Saturday 05 July 2025 22:58:12 +0000 (0:00:07.787) 0:00:18.439 *********
2025-07-05 22:58:24.846654 | orchestrator | changed: [testbed-node-0]
2025-07-05 22:58:24.846665 | orchestrator | changed: [testbed-node-2]
2025-07-05 22:58:24.846676 | orchestrator | changed: [testbed-node-1]
2025-07-05 22:58:24.846687 | orchestrator |
2025-07-05 22:58:24.846698 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 22:58:24.846715 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:58:24.846727 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:58:24.846743 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 22:58:24.846754 | orchestrator |
2025-07-05 22:58:24.846765 | orchestrator |
2025-07-05 22:58:24.846777 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 22:58:24.846788 | orchestrator | Saturday 05 July 2025 22:58:22 +0000 (0:00:10.202) 0:00:28.642 *********
2025-07-05 22:58:24.846799 | orchestrator | ===============================================================================
2025-07-05 22:58:24.846810 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.20s
2025-07-05 22:58:24.846821 | orchestrator | redis : Restart redis container ----------------------------------------- 7.79s
2025-07-05 22:58:24.846832 | orchestrator | redis : Copying over default config.json files -------------------------- 2.71s
2025-07-05 22:58:24.846844 | orchestrator | redis : Copying over redis config files --------------------------------- 2.20s
2025-07-05 22:58:24.846854 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.40s
2025-07-05 22:58:24.846866 | orchestrator | redis : Check redis containers ------------------------------------------ 1.32s
2025-07-05 22:58:24.846877 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s
2025-07-05 22:58:24.846888 | orchestrator | redis : include_tasks --------------------------------------------------- 0.87s
2025-07-05 22:58:24.846899 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s
2025-07-05 22:58:24.846910 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.20s
2025-07-05 22:58:24.846921 | orchestrator | 2025-07-05 22:58:24 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:58:27.883986 | orchestrator | 2025-07-05 22:58:27 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED
2025-07-05 22:58:27.885576 | orchestrator | 2025-07-05 22:58:27 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED
2025-07-05 22:58:27.886283 | orchestrator | 2025-07-05 22:58:27 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:58:27.889852 | orchestrator | 2025-07-05 22:58:27 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:58:27.890719 | orchestrator | 2025-07-05 22:58:27 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED
2025-07-05 22:58:27.890748 | orchestrator | 2025-07-05 22:58:27 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:58:30.929659 | orchestrator | 2025-07-05 22:58:30 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED
2025-07-05 22:58:30.934542 | orchestrator | 2025-07-05 22:58:30 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED
2025-07-05 22:58:30.937870 | orchestrator | 2025-07-05 22:58:30 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:58:30.940802 | orchestrator | 2025-07-05 22:58:30 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:58:30.944935 | orchestrator | 2025-07-05 22:58:30 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED
2025-07-05 22:58:30.944982 | orchestrator | 2025-07-05 22:58:30 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:58:33.971177 | orchestrator | 2025-07-05 22:58:33 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED
2025-07-05 22:58:33.971261 | orchestrator | 2025-07-05 22:58:33 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED
2025-07-05 22:58:33.972118 | orchestrator | 2025-07-05 22:58:33 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:58:33.972707 | orchestrator | 2025-07-05 22:58:33 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:58:33.973403 | orchestrator | 2025-07-05 22:58:33 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED
2025-07-05 22:58:33.973494 | orchestrator | 2025-07-05 22:58:33 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:58:37.028076 | orchestrator | 2025-07-05 22:58:37 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED
2025-07-05 22:58:37.033207 | orchestrator | 2025-07-05 22:58:37 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED
2025-07-05 22:58:37.033262 | orchestrator | 2025-07-05 22:58:37 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:58:37.033501 | orchestrator | 2025-07-05 22:58:37 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:58:37.035820 | orchestrator | 2025-07-05 22:58:37 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED
2025-07-05 22:58:37.035845 | orchestrator | 2025-07-05 22:58:37 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:58:40.076653 | orchestrator | 2025-07-05 22:58:40 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED
2025-07-05 22:58:40.076894 | orchestrator | 2025-07-05 22:58:40 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED
2025-07-05 22:58:40.076931 | orchestrator | 2025-07-05 22:58:40 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:58:40.077711 | orchestrator | 2025-07-05 22:58:40 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:58:40.079305 | orchestrator | 2025-07-05 22:58:40 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED
2025-07-05 22:58:40.079338 | orchestrator | 2025-07-05 22:58:40 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:58:43.121025 | orchestrator | 2025-07-05 22:58:43 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED
2025-07-05 22:58:43.121421 | orchestrator | 2025-07-05 22:58:43 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED
2025-07-05 22:58:43.122495 | orchestrator | 2025-07-05 22:58:43 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:58:43.124673 | orchestrator | 2025-07-05 22:58:43 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:58:43.124698 | orchestrator | 2025-07-05 22:58:43 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED
2025-07-05 22:58:43.124710 | orchestrator | 2025-07-05 22:58:43 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:58:46.159007 | orchestrator | 2025-07-05 22:58:46 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED
2025-07-05 22:58:46.159248 | orchestrator | 2025-07-05 22:58:46 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED
2025-07-05 22:58:46.160236 | orchestrator | 2025-07-05 22:58:46 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:58:46.161087 | orchestrator | 2025-07-05 22:58:46 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:58:46.162142 | orchestrator | 2025-07-05 22:58:46 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED
2025-07-05 22:58:46.162184 | orchestrator | 2025-07-05 22:58:46 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:58:49.194236 | orchestrator | 2025-07-05 22:58:49 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED
2025-07-05 22:58:49.194786 | orchestrator | 2025-07-05 22:58:49 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED
2025-07-05 22:58:49.197138 | orchestrator | 2025-07-05 22:58:49 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:58:49.197796 | orchestrator | 2025-07-05 22:58:49 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:58:49.198808 | orchestrator | 2025-07-05 22:58:49 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED
2025-07-05 22:58:49.200059 | orchestrator | 2025-07-05 22:58:49 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:58:52.253417 | orchestrator | 2025-07-05 22:58:52 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED
2025-07-05 22:58:52.255443 | orchestrator | 2025-07-05 22:58:52 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED
2025-07-05 22:58:52.256533 | orchestrator | 2025-07-05 22:58:52 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:58:52.257506 | orchestrator | 2025-07-05 22:58:52 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:58:52.259511 | orchestrator | 2025-07-05 22:58:52 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED
2025-07-05 22:58:52.259693 | orchestrator | 2025-07-05 22:58:52 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:58:55.292174 | orchestrator | 2025-07-05 22:58:55 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED
2025-07-05 22:58:55.293129 | orchestrator | 2025-07-05 22:58:55 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED
2025-07-05 22:58:55.294280 | orchestrator | 2025-07-05 22:58:55 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:58:55.295479 | orchestrator | 2025-07-05 22:58:55 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:58:55.296871 | orchestrator | 2025-07-05 22:58:55 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED
2025-07-05 22:58:55.296917 | orchestrator | 2025-07-05 22:58:55 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:58:58.338721 | orchestrator | 2025-07-05 22:58:58 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state STARTED
2025-07-05 22:58:58.338821 | orchestrator | 2025-07-05 22:58:58 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED
2025-07-05 22:58:58.341012 | orchestrator | 2025-07-05 22:58:58 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED
2025-07-05 22:58:58.342303 | orchestrator | 2025-07-05 22:58:58 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:58:58.343464 | orchestrator | 2025-07-05 22:58:58 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED
2025-07-05 22:58:58.343664 | orchestrator | 2025-07-05 22:58:58 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:59:01.384318 | orchestrator | 2025-07-05 22:59:01 | INFO  | Task cea90366-b8bd-4bb7-92f0-b245be4bc341 is in state SUCCESS
2025-07-05 22:59:01.387673 | orchestrator |
2025-07-05 22:59:01.387753 | orchestrator |
2025-07-05 22:59:01.387768 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-05 22:59:01.387781 | orchestrator |
2025-07-05 22:59:01.387793 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-05 22:59:01.387805 | orchestrator | Saturday 05 July 2025 22:57:54 +0000 (0:00:00.398) 0:00:00.399 *********
2025-07-05 22:59:01.387842 | orchestrator | ok: [testbed-node-3]
2025-07-05 22:59:01.387854 | orchestrator | ok: [testbed-node-4]
2025-07-05 22:59:01.387866 | orchestrator | ok: [testbed-node-5]
2025-07-05 22:59:01.387876 | orchestrator | ok: [testbed-node-0]
2025-07-05 22:59:01.387887 | orchestrator | ok: [testbed-node-1]
2025-07-05 22:59:01.387898 | orchestrator | ok: [testbed-node-2]
2025-07-05 22:59:01.387909 | orchestrator |
2025-07-05 22:59:01.387921 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-05 22:59:01.387932 | orchestrator | Saturday 05 July 2025 22:57:55 +0000 (0:00:01.325) 0:00:01.724 *********
2025-07-05 22:59:01.387943 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-05 22:59:01.387954 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-05 22:59:01.387965 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-05 22:59:01.387976 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-05 22:59:01.387987 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-05 22:59:01.387998 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-05 22:59:01.388009 | orchestrator |
2025-07-05 22:59:01.388020 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-07-05 22:59:01.388031 | orchestrator |
2025-07-05 22:59:01.388042 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-07-05 22:59:01.388053 | orchestrator | Saturday 05 July 2025 22:57:56 +0000 (0:00:01.131) 0:00:02.855 *********
2025-07-05 22:59:01.388065 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 22:59:01.388078 | orchestrator |
2025-07-05 22:59:01.388089 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-07-05 22:59:01.388100 | orchestrator | Saturday 05 July 2025 22:57:58 +0000 (0:00:01.465) 0:00:04.320 *********
2025-07-05 22:59:01.388112 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-07-05 22:59:01.388123 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-07-05 22:59:01.388134 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-07-05 22:59:01.388163 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-07-05 22:59:01.388175 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-07-05 22:59:01.388187 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-07-05 22:59:01.388198 | orchestrator |
2025-07-05 22:59:01.388212 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-07-05 22:59:01.388225 | orchestrator | Saturday 05 July 2025 22:57:59 +0000 (0:00:01.487) 0:00:05.808 *********
2025-07-05 22:59:01.388238 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-07-05 22:59:01.388251 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-07-05 22:59:01.388263 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-07-05 22:59:01.388276 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-07-05 22:59:01.388288 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-07-05 22:59:01.388300 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-07-05 22:59:01.388318 | orchestrator |
2025-07-05 22:59:01.388332 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-07-05 22:59:01.388344 | orchestrator | Saturday 05 July 2025 22:58:00 +0000 (0:00:01.379) 0:00:07.187 *********
2025-07-05 22:59:01.388355 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-07-05 22:59:01.388366 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:59:01.388378 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-07-05 22:59:01.388389 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:59:01.388434 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-07-05 22:59:01.388454 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:59:01.388465 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-07-05 22:59:01.388477 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:59:01.388487 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-07-05 22:59:01.388498 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:59:01.388510 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-07-05 22:59:01.388527 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:59:01.388538 | orchestrator |
2025-07-05 22:59:01.388549 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-07-05 22:59:01.388561 | orchestrator | Saturday 05 July 2025 22:58:02 +0000 (0:00:01.214) 0:00:08.402 *********
2025-07-05 22:59:01.388572 | orchestrator | skipping: [testbed-node-3]
2025-07-05 22:59:01.388583 | orchestrator | skipping: [testbed-node-4]
2025-07-05 22:59:01.388595 | orchestrator | skipping: [testbed-node-5]
2025-07-05 22:59:01.388605 | orchestrator | skipping: [testbed-node-0]
2025-07-05 22:59:01.388616 | orchestrator | skipping: [testbed-node-1]
2025-07-05 22:59:01.388627 | orchestrator | skipping: [testbed-node-2]
2025-07-05 22:59:01.388639 | orchestrator |
2025-07-05 22:59:01.388650 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-07-05 22:59:01.388661 | orchestrator | Saturday 05 July 2025 22:58:02 +0000 (0:00:00.636) 0:00:09.038 *********
2025-07-05 22:59:01.388694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-05 22:59:01.388711 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name':
'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.388723 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.388736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.388762 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.388775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.388794 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.388806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.388818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.388830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.388854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.388873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2025-07-05 22:59:01.388886 | orchestrator | 2025-07-05 22:59:01.388898 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-07-05 22:59:01.388910 | orchestrator | Saturday 05 July 2025 22:58:04 +0000 (0:00:01.390) 0:00:10.428 ********* 2025-07-05 22:59:01.388921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389085 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389098 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389168 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389192 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389210 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389271 | orchestrator | 2025-07-05 22:59:01.389282 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-07-05 22:59:01.389294 | orchestrator | Saturday 05 July 2025 22:58:06 +0000 (0:00:02.779) 0:00:13.208 ********* 2025-07-05 22:59:01.389305 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:59:01.389317 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:59:01.389328 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:59:01.389339 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:59:01.389350 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:01.389361 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:01.389372 | orchestrator | 2025-07-05 22:59:01.389383 | orchestrator | TASK 
[openvswitch : Check openvswitch containers] ****************************** 2025-07-05 22:59:01.389416 | orchestrator | Saturday 05 July 2025 22:58:07 +0000 (0:00:00.690) 0:00:13.899 ********* 2025-07-05 22:59:01.389428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389447 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389459 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389495 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389507 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389555 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-05 22:59:01.389766 | orchestrator | 2025-07-05 22:59:01.389778 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-05 22:59:01.389790 | orchestrator | Saturday 05 July 2025 22:58:10 +0000 (0:00:02.521) 0:00:16.421 ********* 2025-07-05 22:59:01.389801 | orchestrator | 2025-07-05 22:59:01.389813 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-05 22:59:01.389824 | orchestrator | Saturday 05 July 2025 22:58:10 +0000 (0:00:00.233) 0:00:16.654 ********* 2025-07-05 22:59:01.389835 | orchestrator | 2025-07-05 22:59:01.389846 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-05 22:59:01.389857 | orchestrator | Saturday 05 July 2025 22:58:10 +0000 (0:00:00.144) 0:00:16.798 ********* 2025-07-05 22:59:01.389868 | orchestrator | 2025-07-05 22:59:01.389879 | orchestrator 
| TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-05 22:59:01.389891 | orchestrator | Saturday 05 July 2025 22:58:10 +0000 (0:00:00.220) 0:00:17.018 ********* 2025-07-05 22:59:01.389902 | orchestrator | 2025-07-05 22:59:01.389913 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-05 22:59:01.389924 | orchestrator | Saturday 05 July 2025 22:58:10 +0000 (0:00:00.143) 0:00:17.161 ********* 2025-07-05 22:59:01.389935 | orchestrator | 2025-07-05 22:59:01.389946 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-05 22:59:01.389957 | orchestrator | Saturday 05 July 2025 22:58:10 +0000 (0:00:00.133) 0:00:17.295 ********* 2025-07-05 22:59:01.389968 | orchestrator | 2025-07-05 22:59:01.389979 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-07-05 22:59:01.389991 | orchestrator | Saturday 05 July 2025 22:58:11 +0000 (0:00:00.628) 0:00:17.924 ********* 2025-07-05 22:59:01.390002 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:59:01.390013 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:59:01.390076 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:59:01.390088 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:59:01.390100 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:59:01.390111 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:59:01.390123 | orchestrator | 2025-07-05 22:59:01.390134 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-07-05 22:59:01.390145 | orchestrator | Saturday 05 July 2025 22:58:23 +0000 (0:00:11.786) 0:00:29.711 ********* 2025-07-05 22:59:01.390157 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:59:01.390168 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:59:01.390179 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:59:01.390194 | 
orchestrator | ok: [testbed-node-0] 2025-07-05 22:59:01.390212 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:59:01.390238 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:59:01.390258 | orchestrator | 2025-07-05 22:59:01.390274 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-05 22:59:01.390291 | orchestrator | Saturday 05 July 2025 22:58:25 +0000 (0:00:02.155) 0:00:31.867 ********* 2025-07-05 22:59:01.390307 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:59:01.390324 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:59:01.390340 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:59:01.390358 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:59:01.390376 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:59:01.390433 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:59:01.390456 | orchestrator | 2025-07-05 22:59:01.390480 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-07-05 22:59:01.390494 | orchestrator | Saturday 05 July 2025 22:58:34 +0000 (0:00:09.445) 0:00:41.312 ********* 2025-07-05 22:59:01.390507 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-07-05 22:59:01.390532 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-07-05 22:59:01.390546 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-07-05 22:59:01.390558 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-07-05 22:59:01.390571 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-07-05 22:59:01.390597 | orchestrator | changed: [testbed-node-2] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-07-05 22:59:01.390610 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-07-05 22:59:01.390623 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-07-05 22:59:01.390636 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-07-05 22:59:01.390649 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-07-05 22:59:01.390662 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-07-05 22:59:01.390674 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-07-05 22:59:01.390687 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-05 22:59:01.390701 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-05 22:59:01.390713 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-05 22:59:01.390724 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-05 22:59:01.390735 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-05 22:59:01.390746 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-05 22:59:01.390757 | orchestrator | 2025-07-05 22:59:01.390768 | 
orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-07-05 22:59:01.390779 | orchestrator | Saturday 05 July 2025 22:58:42 +0000 (0:00:07.806) 0:00:49.119 ********* 2025-07-05 22:59:01.390791 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-07-05 22:59:01.390802 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:59:01.390813 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-07-05 22:59:01.390824 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:59:01.390835 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-07-05 22:59:01.390847 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:59:01.390858 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-07-05 22:59:01.390869 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-07-05 22:59:01.391057 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-07-05 22:59:01.391071 | orchestrator | 2025-07-05 22:59:01.391082 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-07-05 22:59:01.391093 | orchestrator | Saturday 05 July 2025 22:58:45 +0000 (0:00:02.608) 0:00:51.727 ********* 2025-07-05 22:59:01.391105 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-07-05 22:59:01.391116 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:59:01.391128 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-07-05 22:59:01.391139 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:59:01.391150 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-07-05 22:59:01.391171 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:59:01.391183 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-07-05 22:59:01.391194 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-07-05 22:59:01.391205 | orchestrator | 
changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-07-05 22:59:01.391216 | orchestrator | 2025-07-05 22:59:01.391228 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-05 22:59:01.391239 | orchestrator | Saturday 05 July 2025 22:58:49 +0000 (0:00:04.059) 0:00:55.786 ********* 2025-07-05 22:59:01.391250 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:59:01.391262 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:59:01.391273 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:59:01.391284 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:59:01.391295 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:59:01.391306 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:59:01.391317 | orchestrator | 2025-07-05 22:59:01.391328 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:59:01.391347 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-05 22:59:01.391359 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-05 22:59:01.391371 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-05 22:59:01.391383 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-05 22:59:01.391420 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-05 22:59:01.391452 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-05 22:59:01.391465 | orchestrator | 2025-07-05 22:59:01.391476 | orchestrator | 2025-07-05 22:59:01.391487 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:59:01.391498 | orchestrator | Saturday 
05 July 2025 22:58:58 +0000 (0:00:08.582) 0:01:04.369 ********* 2025-07-05 22:59:01.391509 | orchestrator | =============================================================================== 2025-07-05 22:59:01.391520 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.03s 2025-07-05 22:59:01.391532 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.79s 2025-07-05 22:59:01.391543 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.81s 2025-07-05 22:59:01.391554 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.06s 2025-07-05 22:59:01.391565 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.78s 2025-07-05 22:59:01.391576 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.61s 2025-07-05 22:59:01.391589 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.52s 2025-07-05 22:59:01.391608 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.16s 2025-07-05 22:59:01.391620 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.50s 2025-07-05 22:59:01.391631 | orchestrator | module-load : Load modules ---------------------------------------------- 1.49s 2025-07-05 22:59:01.391644 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.47s 2025-07-05 22:59:01.391662 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.39s 2025-07-05 22:59:01.391682 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.38s 2025-07-05 22:59:01.391714 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.33s 2025-07-05 22:59:01.391735 | orchestrator | module-load : Drop module 
persistence ----------------------------------- 1.21s 2025-07-05 22:59:01.391748 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.13s 2025-07-05 22:59:01.391761 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.69s 2025-07-05 22:59:01.391774 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.64s 2025-07-05 22:59:01.391786 | orchestrator | 2025-07-05 22:59:01 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:59:01.391800 | orchestrator | 2025-07-05 22:59:01 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:59:01.391812 | orchestrator | 2025-07-05 22:59:01 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:59:01.391825 | orchestrator | 2025-07-05 22:59:01 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:59:01.391838 | orchestrator | 2025-07-05 22:59:01 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 22:59:01.391850 | orchestrator | 2025-07-05 22:59:01 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:59:04.430984 | orchestrator | 2025-07-05 22:59:04 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:59:04.431226 | orchestrator | 2025-07-05 22:59:04 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:59:04.432194 | orchestrator | 2025-07-05 22:59:04 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:59:04.433055 | orchestrator | 2025-07-05 22:59:04 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:59:04.436738 | orchestrator | 2025-07-05 22:59:04 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 22:59:04.436769 | orchestrator | 2025-07-05 22:59:04 | INFO  | Wait 1 second(s) until the 
next check 2025-07-05 22:59:07.468007 | orchestrator | 2025-07-05 22:59:07 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:59:07.468228 | orchestrator | 2025-07-05 22:59:07 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:59:07.469120 | orchestrator | 2025-07-05 22:59:07 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:59:07.469979 | orchestrator | 2025-07-05 22:59:07 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:59:07.470591 | orchestrator | 2025-07-05 22:59:07 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 22:59:07.470613 | orchestrator | 2025-07-05 22:59:07 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:59:10.513603 | orchestrator | 2025-07-05 22:59:10 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:59:10.513707 | orchestrator | 2025-07-05 22:59:10 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:59:10.527388 | orchestrator | 2025-07-05 22:59:10 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:59:10.536019 | orchestrator | 2025-07-05 22:59:10 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:59:10.536538 | orchestrator | 2025-07-05 22:59:10 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 22:59:10.536610 | orchestrator | 2025-07-05 22:59:10 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:59:13.575694 | orchestrator | 2025-07-05 22:59:13 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:59:13.578289 | orchestrator | 2025-07-05 22:59:13 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:59:13.578663 | orchestrator | 2025-07-05 22:59:13 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state 
STARTED 2025-07-05 22:59:13.580778 | orchestrator | 2025-07-05 22:59:13 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:59:13.582394 | orchestrator | 2025-07-05 22:59:13 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 22:59:13.582681 | orchestrator | 2025-07-05 22:59:13 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:59:16.620326 | orchestrator | 2025-07-05 22:59:16 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:59:16.620582 | orchestrator | 2025-07-05 22:59:16 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:59:16.622924 | orchestrator | 2025-07-05 22:59:16 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:59:16.623614 | orchestrator | 2025-07-05 22:59:16 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:59:16.627999 | orchestrator | 2025-07-05 22:59:16 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 22:59:16.628034 | orchestrator | 2025-07-05 22:59:16 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:59:19.669870 | orchestrator | 2025-07-05 22:59:19 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:59:19.671303 | orchestrator | 2025-07-05 22:59:19 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:59:19.672477 | orchestrator | 2025-07-05 22:59:19 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:59:19.673798 | orchestrator | 2025-07-05 22:59:19 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:59:19.675449 | orchestrator | 2025-07-05 22:59:19 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 22:59:19.675526 | orchestrator | 2025-07-05 22:59:19 | INFO  | Wait 1 second(s) until the next check 2025-07-05 
22:59:22.708003 | orchestrator | 2025-07-05 22:59:22 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:59:22.708795 | orchestrator | 2025-07-05 22:59:22 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:59:22.709323 | orchestrator | 2025-07-05 22:59:22 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:59:22.710169 | orchestrator | 2025-07-05 22:59:22 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:59:22.711000 | orchestrator | 2025-07-05 22:59:22 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 22:59:22.711038 | orchestrator | 2025-07-05 22:59:22 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:59:25.750764 | orchestrator | 2025-07-05 22:59:25 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:59:25.751507 | orchestrator | 2025-07-05 22:59:25 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:59:25.753057 | orchestrator | 2025-07-05 22:59:25 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:59:25.755019 | orchestrator | 2025-07-05 22:59:25 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:59:25.756654 | orchestrator | 2025-07-05 22:59:25 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 22:59:25.757089 | orchestrator | 2025-07-05 22:59:25 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:59:28.785271 | orchestrator | 2025-07-05 22:59:28 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:59:28.786331 | orchestrator | 2025-07-05 22:59:28 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state STARTED 2025-07-05 22:59:28.786947 | orchestrator | 2025-07-05 22:59:28 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 
22:59:28.788693 | orchestrator | 2025-07-05 22:59:28 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:59:28.790286 | orchestrator | 2025-07-05 22:59:28 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 22:59:28.790315 | orchestrator | 2025-07-05 22:59:28 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:59:31.826695 | orchestrator | 2025-07-05 22:59:31 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:59:31.826799 | orchestrator | 2025-07-05 22:59:31 | INFO  | Task 9c71b1c1-ee0f-439c-882f-2575bfe47003 is in state SUCCESS 2025-07-05 22:59:31.827463 | orchestrator | 2025-07-05 22:59:31 | INFO  | Task 6c5baccf-77f3-4610-8e83-f4c02b630011 is in state STARTED 2025-07-05 22:59:31.831013 | orchestrator | 2025-07-05 22:59:31.831062 | orchestrator | 2025-07-05 22:59:31.831075 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-07-05 22:59:31.831087 | orchestrator | 2025-07-05 22:59:31.831110 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-07-05 22:59:31.831123 | orchestrator | Saturday 05 July 2025 22:55:12 +0000 (0:00:00.204) 0:00:00.204 ********* 2025-07-05 22:59:31.831134 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:59:31.831147 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:59:31.831158 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:59:31.831169 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:59:31.831180 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:59:31.831190 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:59:31.831201 | orchestrator | 2025-07-05 22:59:31.831213 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-07-05 22:59:31.831224 | orchestrator | Saturday 05 July 2025 22:55:13 +0000 (0:00:00.729) 0:00:00.934 ********* 2025-07-05 22:59:31.831235 | 
orchestrator | skipping: [testbed-node-3] 2025-07-05 22:59:31.831247 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:59:31.831258 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:59:31.831269 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:59:31.831280 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.831291 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.831303 | orchestrator | 2025-07-05 22:59:31.831314 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-07-05 22:59:31.831325 | orchestrator | Saturday 05 July 2025 22:55:14 +0000 (0:00:00.715) 0:00:01.650 ********* 2025-07-05 22:59:31.831336 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:59:31.831347 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:59:31.831358 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:59:31.831369 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:59:31.831380 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.831391 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.831402 | orchestrator | 2025-07-05 22:59:31.831413 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-07-05 22:59:31.831461 | orchestrator | Saturday 05 July 2025 22:55:15 +0000 (0:00:00.889) 0:00:02.540 ********* 2025-07-05 22:59:31.831473 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:59:31.831484 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:59:31.831495 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:59:31.831526 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:59:31.831537 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:59:31.831548 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:59:31.831560 | orchestrator | 2025-07-05 22:59:31.831571 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-07-05 
22:59:31.831582 | orchestrator | Saturday 05 July 2025 22:55:17 +0000 (0:00:02.035) 0:00:04.575 ********* 2025-07-05 22:59:31.831593 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:59:31.831604 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:59:31.831620 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:59:31.831637 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:59:31.831653 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:59:31.831664 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:59:31.831675 | orchestrator | 2025-07-05 22:59:31.831686 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-07-05 22:59:31.831697 | orchestrator | Saturday 05 July 2025 22:55:19 +0000 (0:00:02.207) 0:00:06.783 ********* 2025-07-05 22:59:31.831709 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:59:31.831720 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:59:31.831732 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:59:31.831743 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:59:31.831754 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:59:31.831770 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:59:31.831782 | orchestrator | 2025-07-05 22:59:31.831793 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-07-05 22:59:31.831804 | orchestrator | Saturday 05 July 2025 22:55:21 +0000 (0:00:02.162) 0:00:08.946 ********* 2025-07-05 22:59:31.831816 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:59:31.831826 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:59:31.831837 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:59:31.831848 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:59:31.831859 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.831870 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.831881 | orchestrator | 2025-07-05 
22:59:31.831893 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-07-05 22:59:31.831904 | orchestrator | Saturday 05 July 2025 22:55:22 +0000 (0:00:00.818) 0:00:09.764 ********* 2025-07-05 22:59:31.831915 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:59:31.831926 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:59:31.831937 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:59:31.831949 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:59:31.831959 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.831970 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.831982 | orchestrator | 2025-07-05 22:59:31.831993 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-07-05 22:59:31.832004 | orchestrator | Saturday 05 July 2025 22:55:23 +0000 (0:00:00.748) 0:00:10.513 ********* 2025-07-05 22:59:31.832015 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-05 22:59:31.832026 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-05 22:59:31.832037 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:59:31.832049 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-05 22:59:31.832060 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-05 22:59:31.832071 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:59:31.832083 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-05 22:59:31.832094 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-05 22:59:31.832105 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:59:31.832116 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-05 
22:59:31.832142 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-05 22:59:31.832160 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:59:31.832172 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-05 22:59:31.832183 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-05 22:59:31.832194 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.832205 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-05 22:59:31.832216 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-05 22:59:31.832227 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.832238 | orchestrator | 2025-07-05 22:59:31.832250 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-07-05 22:59:31.832261 | orchestrator | Saturday 05 July 2025 22:55:23 +0000 (0:00:00.773) 0:00:11.287 ********* 2025-07-05 22:59:31.832272 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:59:31.832283 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:59:31.832294 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:59:31.832306 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:59:31.832317 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.832328 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.832339 | orchestrator | 2025-07-05 22:59:31.832350 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-07-05 22:59:31.832363 | orchestrator | Saturday 05 July 2025 22:55:25 +0000 (0:00:01.159) 0:00:12.446 ********* 2025-07-05 22:59:31.832374 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:59:31.832385 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:59:31.832396 | orchestrator | ok: 
[testbed-node-5] 2025-07-05 22:59:31.832407 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:59:31.832418 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:59:31.832450 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:59:31.832461 | orchestrator | 2025-07-05 22:59:31.832472 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-07-05 22:59:31.832483 | orchestrator | Saturday 05 July 2025 22:55:25 +0000 (0:00:00.612) 0:00:13.059 ********* 2025-07-05 22:59:31.832495 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:59:31.832506 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:59:31.832517 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:59:31.832528 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:59:31.832539 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:59:31.832550 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:59:31.832561 | orchestrator | 2025-07-05 22:59:31.832572 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-07-05 22:59:31.832584 | orchestrator | Saturday 05 July 2025 22:55:31 +0000 (0:00:06.206) 0:00:19.265 ********* 2025-07-05 22:59:31.832595 | orchestrator | skipping: [testbed-node-3] 2025-07-05 22:59:31.832606 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:59:31.832617 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:59:31.832628 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:59:31.832639 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.832650 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.832661 | orchestrator | 2025-07-05 22:59:31.832673 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-07-05 22:59:31.832684 | orchestrator | Saturday 05 July 2025 22:55:32 +0000 (0:00:01.080) 0:00:20.346 ********* 2025-07-05 22:59:31.832695 | orchestrator | skipping: [testbed-node-3] 2025-07-05 
22:59:31.832706 | orchestrator | skipping: [testbed-node-4] 2025-07-05 22:59:31.832717 | orchestrator | skipping: [testbed-node-5] 2025-07-05 22:59:31.832729 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:59:31.832744 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.832756 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.832767 | orchestrator | 2025-07-05 22:59:31.832778 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-07-05 22:59:31.832797 | orchestrator | Saturday 05 July 2025 22:55:34 +0000 (0:00:01.730) 0:00:22.076 ********* 2025-07-05 22:59:31.832808 | orchestrator | ok: [testbed-node-3] 2025-07-05 22:59:31.832820 | orchestrator | ok: [testbed-node-4] 2025-07-05 22:59:31.832831 | orchestrator | ok: [testbed-node-5] 2025-07-05 22:59:31.832842 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:59:31.832853 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:59:31.832864 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:59:31.832875 | orchestrator | 2025-07-05 22:59:31.832887 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-07-05 22:59:31.832898 | orchestrator | Saturday 05 July 2025 22:55:35 +0000 (0:00:01.116) 0:00:23.193 ********* 2025-07-05 22:59:31.832910 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-07-05 22:59:31.832922 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-07-05 22:59:31.832933 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-07-05 22:59:31.832944 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-07-05 22:59:31.832956 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-07-05 22:59:31.832967 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-07-05 22:59:31.832978 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-07-05 
22:59:31.833092 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-07-05 22:59:31.833104 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-07-05 22:59:31.833115 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-07-05 22:59:31.833126 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-07-05 22:59:31.833137 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-07-05 22:59:31.833148 | orchestrator | 2025-07-05 22:59:31.833158 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-07-05 22:59:31.833170 | orchestrator | Saturday 05 July 2025 22:55:38 +0000 (0:00:02.304) 0:00:25.497 ********* 2025-07-05 22:59:31.833181 | orchestrator | changed: [testbed-node-3] 2025-07-05 22:59:31.833192 | orchestrator | changed: [testbed-node-5] 2025-07-05 22:59:31.833203 | orchestrator | changed: [testbed-node-4] 2025-07-05 22:59:31.833214 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:59:31.833225 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:59:31.833236 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:59:31.833246 | orchestrator | 2025-07-05 22:59:31.833267 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-07-05 22:59:31.833279 | orchestrator | 2025-07-05 22:59:31.833290 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-07-05 22:59:31.833301 | orchestrator | Saturday 05 July 2025 22:55:39 +0000 (0:00:01.866) 0:00:27.363 ********* 2025-07-05 22:59:31.833312 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:59:31.833323 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:59:31.833334 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:59:31.833345 | orchestrator | 2025-07-05 22:59:31.833357 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-07-05 
22:59:31.833368 | orchestrator | Saturday 05 July 2025 22:55:41 +0000 (0:00:01.043) 0:00:28.407 ********* 2025-07-05 22:59:31.833379 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:59:31.833390 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:59:31.833401 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:59:31.833412 | orchestrator | 2025-07-05 22:59:31.833479 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-07-05 22:59:31.833494 | orchestrator | Saturday 05 July 2025 22:55:42 +0000 (0:00:01.266) 0:00:29.674 ********* 2025-07-05 22:59:31.833505 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:59:31.833516 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:59:31.833527 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:59:31.833538 | orchestrator | 2025-07-05 22:59:31.833549 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-07-05 22:59:31.833560 | orchestrator | Saturday 05 July 2025 22:55:43 +0000 (0:00:01.311) 0:00:30.985 ********* 2025-07-05 22:59:31.833579 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:59:31.833591 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:59:31.833601 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:59:31.833612 | orchestrator | 2025-07-05 22:59:31.833623 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-07-05 22:59:31.833635 | orchestrator | Saturday 05 July 2025 22:55:44 +0000 (0:00:00.712) 0:00:31.698 ********* 2025-07-05 22:59:31.833645 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:59:31.833657 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.833668 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.833679 | orchestrator | 2025-07-05 22:59:31.833690 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-07-05 22:59:31.833701 | orchestrator | Saturday 05 July 2025 
22:55:44 +0000 (0:00:00.404) 0:00:32.102 ********* 2025-07-05 22:59:31.833714 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 22:59:31.833726 | orchestrator | 2025-07-05 22:59:31.833739 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-07-05 22:59:31.833751 | orchestrator | Saturday 05 July 2025 22:55:45 +0000 (0:00:00.719) 0:00:32.822 ********* 2025-07-05 22:59:31.833764 | orchestrator | ok: [testbed-node-1] 2025-07-05 22:59:31.833776 | orchestrator | ok: [testbed-node-0] 2025-07-05 22:59:31.833788 | orchestrator | ok: [testbed-node-2] 2025-07-05 22:59:31.833800 | orchestrator | 2025-07-05 22:59:31.833812 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-07-05 22:59:31.833825 | orchestrator | Saturday 05 July 2025 22:55:47 +0000 (0:00:02.476) 0:00:35.298 ********* 2025-07-05 22:59:31.833837 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.833850 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:59:31.833863 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.833875 | orchestrator | 2025-07-05 22:59:31.833888 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-07-05 22:59:31.833900 | orchestrator | Saturday 05 July 2025 22:55:48 +0000 (0:00:00.783) 0:00:36.082 ********* 2025-07-05 22:59:31.833918 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.833931 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.833943 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:59:31.833956 | orchestrator | 2025-07-05 22:59:31.833968 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-07-05 22:59:31.833981 | orchestrator | Saturday 05 July 2025 22:55:49 +0000 (0:00:01.261) 0:00:37.344 ********* 2025-07-05 22:59:31.833994 | 
orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.834006 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.834051 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:59:31.834066 | orchestrator | 2025-07-05 22:59:31.834079 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-07-05 22:59:31.834090 | orchestrator | Saturday 05 July 2025 22:55:52 +0000 (0:00:02.443) 0:00:39.787 ********* 2025-07-05 22:59:31.834101 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:59:31.834112 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.834123 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.834134 | orchestrator | 2025-07-05 22:59:31.834146 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-07-05 22:59:31.834157 | orchestrator | Saturday 05 July 2025 22:55:52 +0000 (0:00:00.383) 0:00:40.170 ********* 2025-07-05 22:59:31.834168 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:59:31.834179 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.834190 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.834201 | orchestrator | 2025-07-05 22:59:31.834212 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-07-05 22:59:31.834223 | orchestrator | Saturday 05 July 2025 22:55:53 +0000 (0:00:00.316) 0:00:40.486 ********* 2025-07-05 22:59:31.834234 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:59:31.834251 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:59:31.834263 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:59:31.834274 | orchestrator | 2025-07-05 22:59:31.834285 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-07-05 22:59:31.834296 | orchestrator | Saturday 05 July 2025 22:55:54 +0000 (0:00:01.858) 0:00:42.345 ********* 2025-07-05 
22:59:31.834308 | orchestrator | [FAILED - RETRYING: [testbed-node-0], [testbed-node-1], [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails), repeated with the retry counter descending from 20 retries left down to 1, interleaved with STILL ALIVE keepalive markers for the running task] 2025-07-05 22:59:31.835271 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"attempts": 20, "changed": false, "cmd": ["k3s", "kubectl", "get", "nodes", "-l", "node-role.kubernetes.io/master=true", "-o=jsonpath={.items[*].metadata.name}"], "delta": "0:00:00.226897", "end": "2025-07-05 22:59:27.270959", "msg": "non-zero return code", "rc": 1, "start": "2025-07-05 22:59:27.044062", "stderr": "E0705 22:59:27.253449 16178 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nE0705 22:59:27.255231 16178 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nE0705 22:59:27.256838 16178 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nE0705 22:59:27.258792 16178 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nE0705 22:59:27.260563 16178 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nThe connection to the server localhost:8080 was refused - did you specify the right host or port?", "stderr_lines": ["E0705 22:59:27.253449 16178 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "E0705 22:59:27.255231 16178 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "E0705 22:59:27.256838 16178 
memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "E0705 22:59:27.258792 16178 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "E0705 22:59:27.260563 16178 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "The connection to the server localhost:8080 was refused - did you specify the right host or port?"], "stdout": "", "stdout_lines": []} 2025-07-05 22:59:31.835316 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"attempts": 20, "changed": false, "cmd": ["k3s", "kubectl", "get", "nodes", "-l", "node-role.kubernetes.io/master=true", "-o=jsonpath={.items[*].metadata.name}"], "delta": "0:00:00.215256", "end": "2025-07-05 22:59:27.885550", "msg": "non-zero return code", "rc": 1, "start": "2025-07-05 22:59:27.670294", "stderr": "E0705 22:59:27.864500 13792 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nE0705 22:59:27.866609 13792 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nE0705 22:59:27.868348 13792 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nE0705 22:59:27.870012 13792 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp 
[::1]:8080: connect: connection refused\"\nE0705 22:59:27.871864 13792 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nThe connection to the server localhost:8080 was refused - did you specify the right host or port?", "stderr_lines": ["E0705 22:59:27.864500 13792 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "E0705 22:59:27.866609 13792 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "E0705 22:59:27.868348 13792 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "E0705 22:59:27.870012 13792 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "E0705 22:59:27.871864 13792 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "The connection to the server localhost:8080 was refused - did you specify the right host or port?"], "stdout": "", "stdout_lines": []} 2025-07-05 22:59:31.835352 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"attempts": 20, "changed": false, "cmd": ["k3s", "kubectl", "get", "nodes", "-l", "node-role.kubernetes.io/master=true", "-o=jsonpath={.items[*].metadata.name}"], "delta": "0:00:00.191806", "end": "2025-07-05 22:59:28.170344", "msg": "non-zero return code", "rc": 1, "start": "2025-07-05 22:59:27.978538", "stderr": "E0705 22:59:28.155332 13796 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nE0705 22:59:28.157661 13796 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nE0705 22:59:28.159670 13796 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nE0705 22:59:28.161685 13796 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nE0705 22:59:28.163422 13796 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"\nThe connection to the server localhost:8080 was refused - did you specify the right host or port?", "stderr_lines": ["E0705 22:59:28.155332 13796 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "E0705 22:59:28.157661 13796 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "E0705 22:59:28.159670 13796 
memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "E0705 22:59:28.161685 13796 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "E0705 22:59:28.163422 13796 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"http://localhost:8080/api?timeout=32s\\\": dial tcp [::1]:8080: connect: connection refused\"", "The connection to the server localhost:8080 was refused - did you specify the right host or port?"], "stdout": "", "stdout_lines": []} 2025-07-05 22:59:31.835366 | orchestrator | 2025-07-05 22:59:31.835377 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-07-05 22:59:31.835389 | orchestrator | Saturday 05 July 2025 22:59:28 +0000 (0:03:33.344) 0:04:15.690 ********* 2025-07-05 22:59:31.835401 | orchestrator | skipping: [testbed-node-0] 2025-07-05 22:59:31.835411 | orchestrator | skipping: [testbed-node-1] 2025-07-05 22:59:31.835492 | orchestrator | skipping: [testbed-node-2] 2025-07-05 22:59:31.835515 | orchestrator | 2025-07-05 22:59:31.835533 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-07-05 22:59:31.835557 | orchestrator | Saturday 05 July 2025 22:59:28 +0000 (0:00:00.316) 0:04:16.007 ********* 2025-07-05 22:59:31.835569 | orchestrator | changed: [testbed-node-0] 2025-07-05 22:59:31.835579 | orchestrator | changed: [testbed-node-1] 2025-07-05 22:59:31.835590 | orchestrator | changed: [testbed-node-2] 2025-07-05 22:59:31.835601 | orchestrator | 2025-07-05 22:59:31.835612 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 22:59:31.835624 | orchestrator | testbed-node-0 : ok=20  
changed=11  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2025-07-05 22:59:31.835637 | orchestrator | testbed-node-1 : ok=17  changed=8  unreachable=0 failed=1  skipped=15  rescued=0 ignored=0 2025-07-05 22:59:31.835649 | orchestrator | testbed-node-2 : ok=17  changed=8  unreachable=0 failed=1  skipped=15  rescued=0 ignored=0 2025-07-05 22:59:31.835660 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-05 22:59:31.835672 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-05 22:59:31.835688 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-05 22:59:31.835700 | orchestrator | 2025-07-05 22:59:31.835711 | orchestrator | 2025-07-05 22:59:31.835722 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 22:59:31.835733 | orchestrator | Saturday 05 July 2025 22:59:29 +0000 (0:00:01.313) 0:04:17.320 ********* 2025-07-05 22:59:31.835744 | orchestrator | =============================================================================== 2025-07-05 22:59:31.835757 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) - 213.34s 2025-07-05 22:59:31.835770 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.21s 2025-07-05 22:59:31.835782 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.48s 2025-07-05 22:59:31.835794 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.44s 2025-07-05 22:59:31.835806 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.30s 2025-07-05 22:59:31.835818 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.21s 2025-07-05 22:59:31.835830 | orchestrator | 
k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.16s 2025-07-05 22:59:31.835843 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.04s 2025-07-05 22:59:31.835856 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.87s 2025-07-05 22:59:31.835867 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.86s 2025-07-05 22:59:31.835878 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.73s 2025-07-05 22:59:31.835888 | orchestrator | k3s_server : Kill the temporary service used for initialization --------- 1.31s 2025-07-05 22:59:31.835899 | orchestrator | k3s_server : Stop k3s --------------------------------------------------- 1.31s 2025-07-05 22:59:31.835910 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.27s 2025-07-05 22:59:31.835921 | orchestrator | k3s_server : Download vip rbac manifest to first master ----------------- 1.26s 2025-07-05 22:59:31.835932 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.16s 2025-07-05 22:59:31.835943 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 1.12s 2025-07-05 22:59:31.835961 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.08s 2025-07-05 22:59:31.835972 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.04s 2025-07-05 22:59:31.835990 | orchestrator | k3s_prereq : Set SELinux to disabled state ------------------------------ 0.89s 2025-07-05 22:59:31.836000 | orchestrator | 2025-07-05 22:59:31 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:59:31.836010 | orchestrator | 2025-07-05 22:59:31 | INFO  | Task 
5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:59:31.836020 | orchestrator | 2025-07-05 22:59:31 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 22:59:31.836030 | orchestrator | 2025-07-05 22:59:31 | INFO  | Task 2be37331-e1f7-469b-9f89-a97b7d375fc8 is in state STARTED 2025-07-05 22:59:31.836040 | orchestrator | 2025-07-05 22:59:31 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:59:34.864056 | orchestrator | 2025-07-05 22:59:34 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:59:34.865940 | orchestrator | 2025-07-05 22:59:34 | INFO  | Task 6c5baccf-77f3-4610-8e83-f4c02b630011 is in state STARTED 2025-07-05 22:59:34.867636 | orchestrator | 2025-07-05 22:59:34 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:59:34.869214 | orchestrator | 2025-07-05 22:59:34 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:59:34.870890 | orchestrator | 2025-07-05 22:59:34 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 22:59:34.872205 | orchestrator | 2025-07-05 22:59:34 | INFO  | Task 2be37331-e1f7-469b-9f89-a97b7d375fc8 is in state STARTED 2025-07-05 22:59:34.872536 | orchestrator | 2025-07-05 22:59:34 | INFO  | Wait 1 second(s) until the next check 2025-07-05 22:59:37.901162 | orchestrator | 2025-07-05 22:59:37 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED 2025-07-05 22:59:37.902181 | orchestrator | 2025-07-05 22:59:37 | INFO  | Task 6c5baccf-77f3-4610-8e83-f4c02b630011 is in state SUCCESS 2025-07-05 22:59:37.903513 | orchestrator | 2025-07-05 22:59:37 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 22:59:37.905656 | orchestrator | 2025-07-05 22:59:37 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 22:59:37.908155 | orchestrator | 2025-07-05 22:59:37 | INFO  | Task 
43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 22:59:37.908661 | orchestrator | 2025-07-05 22:59:37 | INFO  | Task 2be37331-e1f7-469b-9f89-a97b7d375fc8 is in state SUCCESS 2025-07-05 22:59:37.908920 | orchestrator | 2025-07-05 22:59:37 | INFO  | Wait 1 second(s) until the next check
2025-07-05 22:59:40.950405 | orchestrator | 2025-07-05 22:59:40 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state STARTED
2025-07-05 22:59:40.950555 | orchestrator | 2025-07-05 22:59:40 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 22:59:40.951153 | orchestrator | 2025-07-05 22:59:40 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED
2025-07-05 22:59:40.952276 | orchestrator | 2025-07-05 22:59:40 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED
2025-07-05 22:59:40.952389 | orchestrator | 2025-07-05 22:59:40 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:00:29.735665 | orchestrator | 2025-07-05 23:00:29 | INFO  | Task c687469d-6c99-4b64-bb10-dd48064c5a62 is in state SUCCESS
2025-07-05 23:00:29.737165 | orchestrator |
2025-07-05 23:00:29.737210 | orchestrator |
2025-07-05 23:00:29.737223 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-07-05 23:00:29.737236 | orchestrator |
2025-07-05 23:00:29.737248 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-07-05 23:00:29.737260 | orchestrator | Saturday 05 July 2025 22:59:34 +0000 (0:00:00.169) 0:00:00.169 *********
2025-07-05 23:00:29.737272 | orchestrator | ok: [testbed-manager]
2025-07-05 23:00:29.737284 | orchestrator |
2025-07-05 23:00:29.737296 | orchestrator | TASK [Create .kube directory] **************************************************
2025-07-05 23:00:29.737307 | orchestrator | Saturday 05 July 2025 22:59:35 +0000 (0:00:00.574) 0:00:00.960 *********
2025-07-05 23:00:29.737319 | orchestrator |
changed: [testbed-manager] 2025-07-05 23:00:29.737330 | orchestrator | 2025-07-05 23:00:29.737342 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-05 23:00:29.737354 | orchestrator | Saturday 05 July 2025 22:59:35 +0000 (0:00:00.574) 0:00:01.534 ********* 2025-07-05 23:00:29.737365 | orchestrator | fatal: [testbed-manager -> testbed-node-0(192.168.16.10)]: FAILED! => {"changed": false, "msg": "file not found: /etc/rancher/k3s/k3s.yaml"} 2025-07-05 23:00:29.737378 | orchestrator | 2025-07-05 23:00:29.737389 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:00:29.737402 | orchestrator | testbed-manager : ok=2  changed=1  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-07-05 23:00:29.737630 | orchestrator | 2025-07-05 23:00:29.737651 | orchestrator | 2025-07-05 23:00:29.737662 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:00:29.737674 | orchestrator | Saturday 05 July 2025 22:59:36 +0000 (0:00:00.633) 0:00:02.168 ********* 2025-07-05 23:00:29.737711 | orchestrator | =============================================================================== 2025-07-05 23:00:29.737723 | orchestrator | Get home directory of operator user ------------------------------------- 0.79s 2025-07-05 23:00:29.737734 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.63s 2025-07-05 23:00:29.737745 | orchestrator | Create .kube directory -------------------------------------------------- 0.57s 2025-07-05 23:00:29.737756 | orchestrator | 2025-07-05 23:00:29.737767 | orchestrator | 2025-07-05 23:00:29.737778 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-07-05 23:00:29.737789 | orchestrator | 2025-07-05 23:00:29.737800 | orchestrator | TASK [Get kubeconfig file] 
***************************************************** 2025-07-05 23:00:29.737811 | orchestrator | Saturday 05 July 2025 22:59:34 +0000 (0:00:00.120) 0:00:00.120 ********* 2025-07-05 23:00:29.737822 | orchestrator | fatal: [testbed-manager -> testbed-node-0(192.168.16.10)]: FAILED! => {"changed": false, "msg": "file not found: /etc/rancher/k3s/k3s.yaml"} 2025-07-05 23:00:29.737834 | orchestrator | 2025-07-05 23:00:29.737845 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:00:29.737856 | orchestrator | testbed-manager : ok=0 changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-07-05 23:00:29.737868 | orchestrator | 2025-07-05 23:00:29.737879 | orchestrator | 2025-07-05 23:00:29.737890 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:00:29.737901 | orchestrator | Saturday 05 July 2025 22:59:35 +0000 (0:00:00.558) 0:00:00.678 ********* 2025-07-05 23:00:29.737912 | orchestrator | =============================================================================== 2025-07-05 23:00:29.737923 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.56s 2025-07-05 23:00:29.737934 | orchestrator | 2025-07-05 23:00:29.737945 | orchestrator | 2025-07-05 23:00:29.737956 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-07-05 23:00:29.737967 | orchestrator | 2025-07-05 23:00:29.737978 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-07-05 23:00:29.737990 | orchestrator | Saturday 05 July 2025 22:58:14 +0000 (0:00:00.246) 0:00:00.246 ********* 2025-07-05 23:00:29.738001 | orchestrator | ok: [localhost] => { 2025-07-05 23:00:29.738012 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-07-05 23:00:29.738117 | orchestrator | } 2025-07-05 23:00:29.738130 | orchestrator | 2025-07-05 23:00:29.738141 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-07-05 23:00:29.738152 | orchestrator | Saturday 05 July 2025 22:58:14 +0000 (0:00:00.071) 0:00:00.318 ********* 2025-07-05 23:00:29.738164 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-07-05 23:00:29.738176 | orchestrator | ...ignoring 2025-07-05 23:00:29.738188 | orchestrator | 2025-07-05 23:00:29.738200 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-07-05 23:00:29.738211 | orchestrator | Saturday 05 July 2025 22:58:17 +0000 (0:00:03.252) 0:00:03.570 ********* 2025-07-05 23:00:29.738222 | orchestrator | skipping: [localhost] 2025-07-05 23:00:29.738233 | orchestrator | 2025-07-05 23:00:29.738244 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-07-05 23:00:29.738256 | orchestrator | Saturday 05 July 2025 22:58:17 +0000 (0:00:00.043) 0:00:03.614 ********* 2025-07-05 23:00:29.738269 | orchestrator | ok: [localhost] 2025-07-05 23:00:29.738282 | orchestrator | 2025-07-05 23:00:29.738295 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:00:29.738308 | orchestrator | 2025-07-05 23:00:29.738321 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 23:00:29.738334 | orchestrator | Saturday 05 July 2025 22:58:17 +0000 (0:00:00.127) 0:00:03.741 ********* 2025-07-05 23:00:29.738356 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:00:29.738369 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:00:29.738382 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:00:29.738395 | orchestrator | 2025-07-05 
23:00:29.738408 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:00:29.738436 | orchestrator | Saturday 05 July 2025 22:58:18 +0000 (0:00:00.269) 0:00:04.010 ********* 2025-07-05 23:00:29.738450 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-07-05 23:00:29.738463 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-07-05 23:00:29.738496 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-07-05 23:00:29.738509 | orchestrator | 2025-07-05 23:00:29.738521 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-07-05 23:00:29.738534 | orchestrator | 2025-07-05 23:00:29.738546 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-05 23:00:29.738559 | orchestrator | Saturday 05 July 2025 22:58:18 +0000 (0:00:00.472) 0:00:04.483 ********* 2025-07-05 23:00:29.738572 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:00:29.738585 | orchestrator | 2025-07-05 23:00:29.738597 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-07-05 23:00:29.738611 | orchestrator | Saturday 05 July 2025 22:58:18 +0000 (0:00:00.490) 0:00:04.974 ********* 2025-07-05 23:00:29.738622 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:00:29.738633 | orchestrator | 2025-07-05 23:00:29.738644 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-07-05 23:00:29.738656 | orchestrator | Saturday 05 July 2025 22:58:19 +0000 (0:00:00.922) 0:00:05.896 ********* 2025-07-05 23:00:29.738667 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:00:29.738678 | orchestrator | 2025-07-05 23:00:29.738689 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2025-07-05 23:00:29.738700 | orchestrator | Saturday 05 July 2025 22:58:20 +0000 (0:00:00.391) 0:00:06.287 ********* 2025-07-05 23:00:29.738710 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:00:29.738721 | orchestrator | 2025-07-05 23:00:29.738733 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-07-05 23:00:29.739433 | orchestrator | Saturday 05 July 2025 22:58:20 +0000 (0:00:00.435) 0:00:06.723 ********* 2025-07-05 23:00:29.739451 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:00:29.739463 | orchestrator | 2025-07-05 23:00:29.739474 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-07-05 23:00:29.739503 | orchestrator | Saturday 05 July 2025 22:58:21 +0000 (0:00:00.437) 0:00:07.161 ********* 2025-07-05 23:00:29.739515 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:00:29.739526 | orchestrator | 2025-07-05 23:00:29.739537 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-05 23:00:29.739548 | orchestrator | Saturday 05 July 2025 22:58:21 +0000 (0:00:00.578) 0:00:07.740 ********* 2025-07-05 23:00:29.739559 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:00:29.739570 | orchestrator | 2025-07-05 23:00:29.739582 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-07-05 23:00:29.739593 | orchestrator | Saturday 05 July 2025 22:58:22 +0000 (0:00:00.925) 0:00:08.666 ********* 2025-07-05 23:00:29.739604 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:00:29.739615 | orchestrator | 2025-07-05 23:00:29.739626 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-07-05 23:00:29.739637 | orchestrator | Saturday 05 July 2025 22:58:23 +0000 (0:00:01.005) 0:00:09.671 ********* 2025-07-05 
23:00:29.739648 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:00:29.739659 | orchestrator | 2025-07-05 23:00:29.739670 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-07-05 23:00:29.739681 | orchestrator | Saturday 05 July 2025 22:58:24 +0000 (0:00:00.671) 0:00:10.342 ********* 2025-07-05 23:00:29.739702 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:00:29.739713 | orchestrator | 2025-07-05 23:00:29.739725 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-07-05 23:00:29.739736 | orchestrator | Saturday 05 July 2025 22:58:24 +0000 (0:00:00.398) 0:00:10.741 ********* 2025-07-05 23:00:29.739750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-05 23:00:29.739786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-05 23:00:29.739802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-05 
23:00:29.739815 | orchestrator | 2025-07-05 23:00:29.739826 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-07-05 23:00:29.739838 | orchestrator | Saturday 05 July 2025 22:58:25 +0000 (0:00:01.220) 0:00:11.962 ********* 2025-07-05 23:00:29.739851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-05 23:00:29.739875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-05 23:00:29.739897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-05 23:00:29.739910 | orchestrator | 2025-07-05 23:00:29.739921 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-07-05 23:00:29.739932 | orchestrator | Saturday 05 July 2025 22:58:29 +0000 (0:00:03.437) 0:00:15.399 ********* 2025-07-05 23:00:29.739943 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-05 23:00:29.739954 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-05 23:00:29.739966 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-05 23:00:29.739977 | orchestrator | 2025-07-05 23:00:29.739988 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-07-05 23:00:29.739999 | orchestrator | Saturday 05 July 2025 22:58:30 +0000 (0:00:01.520) 0:00:16.920 ********* 2025-07-05 23:00:29.740010 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-05 23:00:29.740021 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-05 23:00:29.740039 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-05 23:00:29.740050 | orchestrator | 2025-07-05 23:00:29.740061 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-07-05 23:00:29.740072 | orchestrator | Saturday 05 July 2025 22:58:32 +0000 (0:00:01.851) 0:00:18.771 ********* 2025-07-05 23:00:29.740084 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-05 23:00:29.740095 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-05 23:00:29.740106 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-05 23:00:29.740117 | orchestrator | 2025-07-05 23:00:29.740128 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-07-05 23:00:29.740139 | orchestrator | Saturday 05 July 2025 22:58:34 +0000 (0:00:01.301) 0:00:20.073 ********* 2025-07-05 23:00:29.740150 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-05 23:00:29.740161 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-05 23:00:29.740172 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-05 23:00:29.740184 | orchestrator | 2025-07-05 23:00:29.740195 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-07-05 23:00:29.740206 | orchestrator | Saturday 05 July 2025 22:58:35 +0000 (0:00:01.741) 0:00:21.815 ********* 2025-07-05 23:00:29.740217 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-05 23:00:29.740228 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-05 23:00:29.740239 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-05 23:00:29.740250 | orchestrator | 2025-07-05 23:00:29.740261 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-07-05 23:00:29.740272 | orchestrator | Saturday 05 July 2025 22:58:37 +0000 (0:00:01.771) 0:00:23.587 ********* 2025-07-05 23:00:29.740283 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-05 23:00:29.740294 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-05 23:00:29.740305 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-05 23:00:29.740316 | orchestrator | 2025-07-05 23:00:29.740327 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-05 23:00:29.740343 | orchestrator | Saturday 05 July 2025 22:58:39 +0000 (0:00:01.794) 0:00:25.382 ********* 2025-07-05 
23:00:29.740354 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:00:29.740366 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:00:29.740377 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:00:29.740388 | orchestrator | 2025-07-05 23:00:29.740399 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-07-05 23:00:29.740417 | orchestrator | Saturday 05 July 2025 22:58:39 +0000 (0:00:00.434) 0:00:25.816 ********* 2025-07-05 23:00:29.740429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-05 23:00:29.740452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-05 23:00:29.740466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-05 23:00:29.740507 | orchestrator | 2025-07-05 23:00:29.740519 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-07-05 23:00:29.740530 | orchestrator | Saturday 05 July 2025 
22:58:41 +0000 (0:00:01.517) 0:00:27.333 ********* 2025-07-05 23:00:29.740541 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:00:29.740552 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:00:29.740563 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:00:29.740574 | orchestrator | 2025-07-05 23:00:29.740585 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-07-05 23:00:29.740596 | orchestrator | Saturday 05 July 2025 22:58:42 +0000 (0:00:00.836) 0:00:28.170 ********* 2025-07-05 23:00:29.740607 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:00:29.740617 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:00:29.740628 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:00:29.740639 | orchestrator | 2025-07-05 23:00:29.740655 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-07-05 23:00:29.740666 | orchestrator | Saturday 05 July 2025 22:58:49 +0000 (0:00:07.363) 0:00:35.533 ********* 2025-07-05 23:00:29.740677 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:00:29.740688 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:00:29.740699 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:00:29.740710 | orchestrator | 2025-07-05 23:00:29.740727 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-05 23:00:29.740739 | orchestrator | 2025-07-05 23:00:29.740750 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-05 23:00:29.740768 | orchestrator | Saturday 05 July 2025 22:58:50 +0000 (0:00:00.658) 0:00:36.192 ********* 2025-07-05 23:00:29.740780 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:00:29.740791 | orchestrator | 2025-07-05 23:00:29.740802 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-05 23:00:29.740813 | orchestrator | Saturday 05 
July 2025 22:58:51 +0000 (0:00:00.878) 0:00:37.070 ********* 2025-07-05 23:00:29.740824 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:00:29.740835 | orchestrator | 2025-07-05 23:00:29.740846 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-05 23:00:29.740857 | orchestrator | Saturday 05 July 2025 22:58:51 +0000 (0:00:00.401) 0:00:37.472 ********* 2025-07-05 23:00:29.740868 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:00:29.740879 | orchestrator | 2025-07-05 23:00:29.740891 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-05 23:00:29.740902 | orchestrator | Saturday 05 July 2025 22:58:53 +0000 (0:00:02.029) 0:00:39.501 ********* 2025-07-05 23:00:29.740913 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:00:29.740924 | orchestrator | 2025-07-05 23:00:29.740935 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-05 23:00:29.740946 | orchestrator | 2025-07-05 23:00:29.740957 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-05 23:00:29.740968 | orchestrator | Saturday 05 July 2025 22:59:48 +0000 (0:00:55.133) 0:01:34.634 ********* 2025-07-05 23:00:29.740979 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:00:29.740990 | orchestrator | 2025-07-05 23:00:29.741002 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-05 23:00:29.741013 | orchestrator | Saturday 05 July 2025 22:59:49 +0000 (0:00:00.607) 0:01:35.242 ********* 2025-07-05 23:00:29.741024 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:00:29.741035 | orchestrator | 2025-07-05 23:00:29.741046 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-05 23:00:29.741057 | orchestrator | Saturday 05 July 2025 22:59:49 +0000 (0:00:00.399) 0:01:35.641 
********* 2025-07-05 23:00:29.741068 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:00:29.741079 | orchestrator | 2025-07-05 23:00:29.741090 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-05 23:00:29.741101 | orchestrator | Saturday 05 July 2025 22:59:51 +0000 (0:00:01.798) 0:01:37.440 ********* 2025-07-05 23:00:29.741112 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:00:29.741123 | orchestrator | 2025-07-05 23:00:29.741134 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-05 23:00:29.741145 | orchestrator | 2025-07-05 23:00:29.741156 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-05 23:00:29.741190 | orchestrator | Saturday 05 July 2025 23:00:06 +0000 (0:00:15.099) 0:01:52.539 ********* 2025-07-05 23:00:29.741203 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:00:29.741214 | orchestrator | 2025-07-05 23:00:29.741225 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-05 23:00:29.741236 | orchestrator | Saturday 05 July 2025 23:00:07 +0000 (0:00:00.636) 0:01:53.175 ********* 2025-07-05 23:00:29.741247 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:00:29.741258 | orchestrator | 2025-07-05 23:00:29.741269 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-05 23:00:29.741280 | orchestrator | Saturday 05 July 2025 23:00:07 +0000 (0:00:00.309) 0:01:53.485 ********* 2025-07-05 23:00:29.741291 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:00:29.741303 | orchestrator | 2025-07-05 23:00:29.741314 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-05 23:00:29.741325 | orchestrator | Saturday 05 July 2025 23:00:09 +0000 (0:00:01.579) 0:01:55.065 ********* 2025-07-05 23:00:29.741336 | orchestrator | 
changed: [testbed-node-2] 2025-07-05 23:00:29.741347 | orchestrator | 2025-07-05 23:00:29.741358 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-07-05 23:00:29.741376 | orchestrator | 2025-07-05 23:00:29.741388 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-07-05 23:00:29.741399 | orchestrator | Saturday 05 July 2025 23:00:24 +0000 (0:00:15.658) 0:02:10.723 ********* 2025-07-05 23:00:29.741410 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:00:29.741421 | orchestrator | 2025-07-05 23:00:29.741432 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-07-05 23:00:29.741444 | orchestrator | Saturday 05 July 2025 23:00:25 +0000 (0:00:00.756) 0:02:11.480 ********* 2025-07-05 23:00:29.741455 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-05 23:00:29.741465 | orchestrator | enable_outward_rabbitmq_True 2025-07-05 23:00:29.741498 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-05 23:00:29.741510 | orchestrator | outward_rabbitmq_restart 2025-07-05 23:00:29.741521 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:00:29.741533 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:00:29.741544 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:00:29.741555 | orchestrator | 2025-07-05 23:00:29.741566 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-07-05 23:00:29.741578 | orchestrator | skipping: no hosts matched 2025-07-05 23:00:29.741588 | orchestrator | 2025-07-05 23:00:29.741599 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-07-05 23:00:29.741611 | orchestrator | skipping: no hosts matched 2025-07-05 23:00:29.741621 | orchestrator | 2025-07-05 23:00:29.741633 | orchestrator | PLAY 
[Apply rabbitmq (outward) post-configuration] ***************************** 2025-07-05 23:00:29.741644 | orchestrator | skipping: no hosts matched 2025-07-05 23:00:29.741655 | orchestrator | 2025-07-05 23:00:29.741666 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:00:29.741683 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-05 23:00:29.741702 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-05 23:00:29.741714 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 23:00:29.741725 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 23:00:29.741736 | orchestrator | 2025-07-05 23:00:29.741747 | orchestrator | 2025-07-05 23:00:29.741759 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:00:29.741770 | orchestrator | Saturday 05 July 2025 23:00:28 +0000 (0:00:02.887) 0:02:14.368 ********* 2025-07-05 23:00:29.741781 | orchestrator | =============================================================================== 2025-07-05 23:00:29.741792 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 85.89s 2025-07-05 23:00:29.741803 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.36s 2025-07-05 23:00:29.741814 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.41s 2025-07-05 23:00:29.741825 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.44s 2025-07-05 23:00:29.741836 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.25s 2025-07-05 23:00:29.741847 | orchestrator | rabbitmq : Enable all stable feature 
flags ------------------------------ 2.89s 2025-07-05 23:00:29.741858 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.12s 2025-07-05 23:00:29.741869 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.85s 2025-07-05 23:00:29.741880 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.79s 2025-07-05 23:00:29.741891 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.77s 2025-07-05 23:00:29.741918 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.74s 2025-07-05 23:00:29.741929 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.52s 2025-07-05 23:00:29.741940 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.52s 2025-07-05 23:00:29.741958 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.30s 2025-07-05 23:00:29.741977 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.22s 2025-07-05 23:00:29.741995 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.11s 2025-07-05 23:00:29.742013 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.01s 2025-07-05 23:00:29.742127 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.93s 2025-07-05 23:00:29.742147 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.92s 2025-07-05 23:00:29.742165 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.84s 2025-07-05 23:00:29.742185 | orchestrator | 2025-07-05 23:00:29 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:00:29.742204 | orchestrator | 2025-07-05 23:00:29 | INFO  | Task 
5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:00:29.742219 | orchestrator | 2025-07-05 23:00:29 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:00:29.742231 | orchestrator | 2025-07-05 23:00:29 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:00:32.774308 | orchestrator | 2025-07-05 23:00:32 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:00:32.774617 | orchestrator | 2025-07-05 23:00:32 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:00:32.775571 | orchestrator | 2025-07-05 23:00:32 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:00:32.775599 | orchestrator | 2025-07-05 23:00:32 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:00:35.825978 | orchestrator | 2025-07-05 23:00:35 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:00:35.826177 | orchestrator | 2025-07-05 23:00:35 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:00:35.826193 | orchestrator | 2025-07-05 23:00:35 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:00:35.826206 | orchestrator | 2025-07-05 23:00:35 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:00:38.863581 | orchestrator | 2025-07-05 23:00:38 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:00:38.864627 | orchestrator | 2025-07-05 23:00:38 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:00:38.865932 | orchestrator | 2025-07-05 23:00:38 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:00:38.866170 | orchestrator | 2025-07-05 23:00:38 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:00:41.902216 | orchestrator | 2025-07-05 23:00:41 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state 
STARTED 2025-07-05 23:00:41.906397 | orchestrator | 2025-07-05 23:00:41 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:00:41.907929 | orchestrator | 2025-07-05 23:00:41 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:00:41.908852 | orchestrator | 2025-07-05 23:00:41 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:00:44.956170 | orchestrator | 2025-07-05 23:00:44 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:00:44.958529 | orchestrator | 2025-07-05 23:00:44 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:00:44.960839 | orchestrator | 2025-07-05 23:00:44 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:00:44.960886 | orchestrator | 2025-07-05 23:00:44 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:00:48.005919 | orchestrator | 2025-07-05 23:00:48 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:00:48.006679 | orchestrator | 2025-07-05 23:00:48 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:00:48.007578 | orchestrator | 2025-07-05 23:00:48 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:00:48.007602 | orchestrator | 2025-07-05 23:00:48 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:00:51.062287 | orchestrator | 2025-07-05 23:00:51 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:00:51.062748 | orchestrator | 2025-07-05 23:00:51 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:00:51.064655 | orchestrator | 2025-07-05 23:00:51 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:00:51.064698 | orchestrator | 2025-07-05 23:00:51 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:00:54.101284 | orchestrator | 
2025-07-05 23:00:54 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:00:54.101910 | orchestrator | 2025-07-05 23:00:54 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:00:54.103021 | orchestrator | 2025-07-05 23:00:54 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:00:54.103121 | orchestrator | 2025-07-05 23:00:54 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:00:57.144317 | orchestrator | 2025-07-05 23:00:57 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:00:57.146610 | orchestrator | 2025-07-05 23:00:57 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:00:57.147180 | orchestrator | 2025-07-05 23:00:57 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:00:57.147204 | orchestrator | 2025-07-05 23:00:57 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:01:00.182471 | orchestrator | 2025-07-05 23:01:00 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:00.184820 | orchestrator | 2025-07-05 23:01:00 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:00.187767 | orchestrator | 2025-07-05 23:01:00 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:01:00.187827 | orchestrator | 2025-07-05 23:01:00 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:01:03.220369 | orchestrator | 2025-07-05 23:01:03 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:03.222932 | orchestrator | 2025-07-05 23:01:03 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:03.223937 | orchestrator | 2025-07-05 23:01:03 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:01:03.224207 | orchestrator | 2025-07-05 23:01:03 | INFO  | 
Wait 1 second(s) until the next check 2025-07-05 23:01:06.268963 | orchestrator | 2025-07-05 23:01:06 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:06.269679 | orchestrator | 2025-07-05 23:01:06 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:06.270185 | orchestrator | 2025-07-05 23:01:06 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:01:06.270214 | orchestrator | 2025-07-05 23:01:06 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:01:09.312109 | orchestrator | 2025-07-05 23:01:09 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:09.312237 | orchestrator | 2025-07-05 23:01:09 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:09.312898 | orchestrator | 2025-07-05 23:01:09 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:01:09.312942 | orchestrator | 2025-07-05 23:01:09 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:01:12.366469 | orchestrator | 2025-07-05 23:01:12 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:12.371961 | orchestrator | 2025-07-05 23:01:12 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:12.374671 | orchestrator | 2025-07-05 23:01:12 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:01:12.374974 | orchestrator | 2025-07-05 23:01:12 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:01:15.425510 | orchestrator | 2025-07-05 23:01:15 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:15.426408 | orchestrator | 2025-07-05 23:01:15 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:15.427474 | orchestrator | 2025-07-05 23:01:15 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state 
STARTED 2025-07-05 23:01:15.427741 | orchestrator | 2025-07-05 23:01:15 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:01:18.472629 | orchestrator | 2025-07-05 23:01:18 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:18.473032 | orchestrator | 2025-07-05 23:01:18 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:18.473707 | orchestrator | 2025-07-05 23:01:18 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:01:18.473731 | orchestrator | 2025-07-05 23:01:18 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:01:21.519116 | orchestrator | 2025-07-05 23:01:21 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:21.521675 | orchestrator | 2025-07-05 23:01:21 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:21.524263 | orchestrator | 2025-07-05 23:01:21 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:01:21.524748 | orchestrator | 2025-07-05 23:01:21 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:01:24.570314 | orchestrator | 2025-07-05 23:01:24 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:24.570704 | orchestrator | 2025-07-05 23:01:24 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:24.571896 | orchestrator | 2025-07-05 23:01:24 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:01:24.572156 | orchestrator | 2025-07-05 23:01:24 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:01:27.616110 | orchestrator | 2025-07-05 23:01:27 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:27.619360 | orchestrator | 2025-07-05 23:01:27 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:27.621716 | orchestrator | 
2025-07-05 23:01:27 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:01:27.621749 | orchestrator | 2025-07-05 23:01:27 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:01:30.670122 | orchestrator | 2025-07-05 23:01:30 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:30.670923 | orchestrator | 2025-07-05 23:01:30 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:30.672210 | orchestrator | 2025-07-05 23:01:30 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:01:30.672235 | orchestrator | 2025-07-05 23:01:30 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:01:33.705076 | orchestrator | 2025-07-05 23:01:33 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:33.705942 | orchestrator | 2025-07-05 23:01:33 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:33.707193 | orchestrator | 2025-07-05 23:01:33 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state STARTED 2025-07-05 23:01:33.707771 | orchestrator | 2025-07-05 23:01:33 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:01:36.743717 | orchestrator | 2025-07-05 23:01:36 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:36.745338 | orchestrator | 2025-07-05 23:01:36 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:36.748508 | orchestrator | 2025-07-05 23:01:36 | INFO  | Task 43afb44a-e630-41cd-ad83-cac02f69f6fd is in state SUCCESS 2025-07-05 23:01:36.750299 | orchestrator | 2025-07-05 23:01:36.750332 | orchestrator | 2025-07-05 23:01:36.750346 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:01:36.750358 | orchestrator | 2025-07-05 23:01:36.750370 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-07-05 23:01:36.750794 | orchestrator | Saturday 05 July 2025 22:59:03 +0000 (0:00:00.129) 0:00:00.129 ********* 2025-07-05 23:01:36.750818 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:01:36.750832 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:01:36.750844 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:01:36.750855 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:01:36.750867 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:01:36.750879 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:01:36.750891 | orchestrator | 2025-07-05 23:01:36.750903 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:01:36.750915 | orchestrator | Saturday 05 July 2025 22:59:03 +0000 (0:00:00.549) 0:00:00.678 ********* 2025-07-05 23:01:36.750926 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-07-05 23:01:36.750939 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-07-05 23:01:36.750951 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-07-05 23:01:36.750962 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-07-05 23:01:36.750974 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-07-05 23:01:36.750986 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-07-05 23:01:36.750998 | orchestrator | 2025-07-05 23:01:36.751009 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-07-05 23:01:36.751021 | orchestrator | 2025-07-05 23:01:36.751033 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-07-05 23:01:36.751046 | orchestrator | Saturday 05 July 2025 22:59:04 +0000 (0:00:01.010) 0:00:01.689 ********* 2025-07-05 23:01:36.751060 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:01:36.751073 | orchestrator |
2025-07-05 23:01:36.751108 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-07-05 23:01:36.751121 | orchestrator | Saturday 05 July 2025 22:59:05 +0000 (0:00:01.099) 0:00:02.788 *********
2025-07-05 23:01:36.751135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751201 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751225 | orchestrator |
2025-07-05 23:01:36.751251 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-07-05 23:01:36.751263 | orchestrator | Saturday 05 July 2025 22:59:07 +0000 (0:00:01.446) 0:00:04.235 *********
2025-07-05 23:01:36.751274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751318 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751330 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751341 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751353 | orchestrator |
2025-07-05 23:01:36.751364 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-07-05 23:01:36.751376 | orchestrator | Saturday 05 July 2025 22:59:08 +0000 (0:00:01.668) 0:00:05.904 *********
2025-07-05 23:01:36.751388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751438 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751468 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751480 | orchestrator |
2025-07-05 23:01:36.751491 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-07-05 23:01:36.751503 | orchestrator | Saturday 05 July 2025 22:59:09 +0000 (0:00:01.080) 0:00:06.984 *********
2025-07-05 23:01:36.751515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751572 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751589 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751601 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751612 | orchestrator |
2025-07-05 23:01:36.751629 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-07-05 23:01:36.751640 | orchestrator | Saturday 05 July 2025 22:59:11 +0000 (0:00:01.809) 0:00:08.794 *********
2025-07-05 23:01:36.751652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751693 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:01:36.751727 | orchestrator |
2025-07-05 23:01:36.751738 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-07-05 23:01:36.751749 | orchestrator | Saturday 05 July 2025 22:59:13 +0000 (0:00:02.387) 0:00:10.350 *********
2025-07-05 23:01:36.751760 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:01:36.751771 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:01:36.751784 | orchestrator | changed: [testbed-node-4]
2025-07-05 23:01:36.751795 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:01:36.751806 | orchestrator | changed: [testbed-node-3]
2025-07-05 23:01:36.751817 | orchestrator | changed: [testbed-node-5]
2025-07-05 23:01:36.751828 | orchestrator |
2025-07-05 23:01:36.751839 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-07-05 23:01:36.751850 | orchestrator | Saturday 05 July 2025 22:59:15 +0000 (0:00:02.387) 0:00:12.738 *********
2025-07-05 23:01:36.751861 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-07-05 23:01:36.751872 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-07-05 23:01:36.751887 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-07-05 23:01:36.751898 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-07-05 23:01:36.751909 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-07-05 23:01:36.751926 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-07-05 23:01:36.751938 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-05 23:01:36.751949 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-05 23:01:36.751965 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-05 23:01:36.751977 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-05 23:01:36.751988 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-05 23:01:36.751999 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-05 23:01:36.752010 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-05 23:01:36.752022 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-05 23:01:36.752033 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-05 23:01:36.752044 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-05 23:01:36.752055 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-05 23:01:36.752066 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-05 23:01:36.752077 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-05 23:01:36.752089 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-05 23:01:36.752100 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-05 23:01:36.752111 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-05 23:01:36.752122 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-05 23:01:36.752134 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-05 23:01:36.752144 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-05 23:01:36.752156 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-05 23:01:36.752166 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-05 23:01:36.752177 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-05 23:01:36.752188 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-05 23:01:36.752199 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-05 23:01:36.752211 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-05 23:01:36.752222 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-05 23:01:36.752233 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-05 23:01:36.752244 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-05 23:01:36.752255 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-05 23:01:36.752272 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-05 23:01:36.752283 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-05 23:01:36.752294 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-05 23:01:36.752305 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-05 23:01:36.752316 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-05 23:01:36.752332 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-05 23:01:36.752344 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-05 23:01:36.752355 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-07-05 23:01:36.752366 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-07-05 23:01:36.752383 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-07-05 23:01:36.752395 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-07-05 23:01:36.752406 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-07-05 23:01:36.752417 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-07-05 23:01:36.752428 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-05 23:01:36.752439 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-05 23:01:36.752450 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-05 23:01:36.752461 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-05 23:01:36.752472 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-05 23:01:36.752483 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-05 23:01:36.752494 | orchestrator |
2025-07-05 23:01:36.752505 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-05 23:01:36.752516 | orchestrator | Saturday 05 July 2025 22:59:34 +0000 (0:00:18.810) 0:00:31.549 *********
2025-07-05 23:01:36.752527 | orchestrator |
2025-07-05 23:01:36.752574 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-05 23:01:36.752586 | orchestrator | Saturday 05 July 2025 22:59:34 +0000 (0:00:00.085) 0:00:31.634 *********
2025-07-05 23:01:36.752597 | orchestrator |
2025-07-05 23:01:36.752608 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-05 23:01:36.752619 | orchestrator | Saturday 05 July 2025 22:59:34 +0000 (0:00:00.068) 0:00:31.703 *********
2025-07-05 23:01:36.752630 | orchestrator |
2025-07-05 23:01:36.752641 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-05 23:01:36.752652 | orchestrator | Saturday 05 July 2025 22:59:34 +0000 (0:00:00.059) 0:00:31.762 *********
2025-07-05 23:01:36.752664 | orchestrator |
2025-07-05 23:01:36.752675 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-05 23:01:36.752693 | orchestrator | Saturday 05 July 2025 22:59:34 +0000 (0:00:00.061) 0:00:31.824 *********
2025-07-05 23:01:36.752704 | orchestrator |
2025-07-05 23:01:36.752715 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-05 23:01:36.752726 | orchestrator | Saturday 05 July 2025 22:59:34 +0000 (0:00:00.060) 0:00:31.885 *********
2025-07-05 23:01:36.752737 | orchestrator |
2025-07-05 23:01:36.752749 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-07-05 23:01:36.752760 | orchestrator | Saturday 05 July 2025 22:59:34 +0000 (0:00:00.061) 0:00:31.946 *********
2025-07-05 23:01:36.752771 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:01:36.752782 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:01:36.752793 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:01:36.752804 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:01:36.752816 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:01:36.752827 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:01:36.752838 | orchestrator |
2025-07-05 23:01:36.752849 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-07-05 23:01:36.752865 | orchestrator | Saturday 05 July 2025 22:59:36 +0000 (0:00:01.785) 0:00:33.732 *********
2025-07-05 23:01:36.752885 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:01:36.752905 | orchestrator | changed: [testbed-node-3]
2025-07-05 23:01:36.752925 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:01:36.752943 | orchestrator | changed: [testbed-node-5]
2025-07-05 23:01:36.752962 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:01:36.752980 | orchestrator | changed: [testbed-node-4]
2025-07-05 23:01:36.752999 | orchestrator |
2025-07-05 23:01:36.753018 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-07-05 23:01:36.753037 | orchestrator |
2025-07-05 23:01:36.753057 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-07-05 23:01:36.753075 | orchestrator | Saturday 05 July 2025 23:00:13 +0000 (0:00:36.370) 0:01:10.103 *********
2025-07-05 23:01:36.753091 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:01:36.753103 | orchestrator |
2025-07-05 23:01:36.753114 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-07-05 23:01:36.753125 | orchestrator | Saturday 05 July 2025 23:00:13 +0000 (0:00:00.545) 0:01:10.648 *********
2025-07-05 23:01:36.753136 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:01:36.753147 | orchestrator |
2025-07-05 23:01:36.753158 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-07-05 23:01:36.753169 | orchestrator | Saturday 05 July 2025 23:00:14 +0000 (0:00:00.825) 0:01:11.474 *********
2025-07-05 23:01:36.753180 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:01:36.753269 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:01:36.753293 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:01:36.753304 | orchestrator |
2025-07-05 23:01:36.753316 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-07-05 23:01:36.753327 | orchestrator | Saturday 05 July 2025 23:00:15 +0000 (0:00:00.838) 0:01:12.313 *********
2025-07-05 23:01:36.753339 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:01:36.753350 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:01:36.753361 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:01:36.753381 | orchestrator |
2025-07-05 23:01:36.753393 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-07-05 23:01:36.753404 | orchestrator | Saturday 05 July 2025 23:00:15 +0000 (0:00:00.350) 0:01:12.663 *********
2025-07-05 23:01:36.753415 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:01:36.753426 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:01:36.753437 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:01:36.753448 | orchestrator |
2025-07-05 23:01:36.753459 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-07-05 23:01:36.753479 | orchestrator | Saturday 05 July 2025 23:00:15 +0000 (0:00:00.342) 0:01:13.006 *********
2025-07-05 23:01:36.753490 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:01:36.753501 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:01:36.753512 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:01:36.753523 | orchestrator |
2025-07-05 23:01:36.753627 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-07-05 23:01:36.753642 | orchestrator | Saturday 05 July 2025 23:00:16 +0000 (0:00:00.610) 0:01:13.616 *********
2025-07-05 23:01:36.753653 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:01:36.753664 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:01:36.753675 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:01:36.753685 | orchestrator |
2025-07-05 23:01:36.753696 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-07-05 23:01:36.753707 | orchestrator | Saturday 05 July 2025 23:00:16 +0000 (0:00:00.395) 0:01:14.011 *********
2025-07-05 23:01:36.753718 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.753730 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.753741 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.753751 | orchestrator |
2025-07-05 23:01:36.753762 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-07-05 23:01:36.753773 | orchestrator | Saturday 05 July 2025 23:00:17 +0000 (0:00:00.346) 0:01:14.358 *********
2025-07-05 23:01:36.753784 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.753795 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.753806 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.753817 | orchestrator |
2025-07-05 23:01:36.753828 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-07-05 23:01:36.753839 | orchestrator | Saturday 05 July 2025 23:00:17 +0000 (0:00:00.324) 0:01:14.682 *********
2025-07-05 23:01:36.753850 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.753861 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.753872 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.753883 | orchestrator |
2025-07-05 23:01:36.753894 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-07-05 23:01:36.753905 | orchestrator | Saturday 05 July 2025 23:00:18 +0000 (0:00:00.515) 0:01:15.198 *********
2025-07-05 23:01:36.753916 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.753927 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.753938 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.753949 | orchestrator |
2025-07-05 23:01:36.753960 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-07-05 23:01:36.753971 | orchestrator | Saturday 05 July 2025 23:00:18 +0000 (0:00:00.348) 0:01:15.546 *********
2025-07-05 23:01:36.753982 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.753993 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.754004 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.754107 | orchestrator |
2025-07-05 23:01:36.754124 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-07-05 23:01:36.754136 | orchestrator | Saturday 05 July 2025 23:00:18 +0000 (0:00:00.307) 0:01:15.854 *********
2025-07-05 23:01:36.754147 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.754158 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.754169 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.754180 | orchestrator |
2025-07-05 23:01:36.754191 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-07-05 23:01:36.754202 | orchestrator | Saturday 05 July 2025 23:00:19 +0000 (0:00:00.295) 0:01:16.150 *********
2025-07-05 23:01:36.754213 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.754224 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.754235 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.754245 | orchestrator |
2025-07-05 23:01:36.754257 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-07-05 23:01:36.754268 | orchestrator | Saturday 05 July 2025 23:00:19 +0000 (0:00:00.487) 0:01:16.637 *********
2025-07-05 23:01:36.754288 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.754299 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.754310 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.754321 | orchestrator |
2025-07-05 23:01:36.754332 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-07-05 23:01:36.754343 | orchestrator | Saturday 05 July 2025 23:00:19 +0000 (0:00:00.451) 0:01:17.088 *********
2025-07-05 23:01:36.754354 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.754365 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.754376 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.754387 | orchestrator |
2025-07-05 23:01:36.754398 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-07-05 23:01:36.754409 | orchestrator | Saturday 05 July 2025 23:00:20 +0000 (0:00:00.351) 0:01:17.439 *********
2025-07-05 23:01:36.754420 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.754437 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.754448 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.754459 | orchestrator |
2025-07-05 23:01:36.754470 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-07-05 23:01:36.754481 | orchestrator | Saturday 05 July 2025 23:00:20 +0000 (0:00:00.470) 0:01:17.910 *********
2025-07-05 23:01:36.754492 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.754503 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.754515 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.754526 | orchestrator |
2025-07-05 23:01:36.754600 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-07-05 23:01:36.754612 | orchestrator | Saturday 05 July 2025 23:00:21 +0000 (0:00:00.605) 0:01:18.515 *********
2025-07-05 23:01:36.754623 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.754635 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.754656 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.754667 | orchestrator |
2025-07-05 23:01:36.754678 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-07-05 23:01:36.754689 | orchestrator | Saturday 05 July 2025 23:00:21 +0000 (0:00:00.302) 0:01:18.817 *********
2025-07-05 23:01:36.754701 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:01:36.754712 | orchestrator |
2025-07-05 23:01:36.754723 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-07-05 23:01:36.754734 | orchestrator | Saturday 05 July 2025 23:00:22 +0000 (0:00:00.591) 0:01:19.409 *********
2025-07-05 23:01:36.754745 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:01:36.754756 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:01:36.754767 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:01:36.754778 | orchestrator |
2025-07-05 23:01:36.754789 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-07-05 23:01:36.754800 | orchestrator | Saturday 05 July 2025 23:00:23 +0000 (0:00:00.821) 0:01:20.230 *********
2025-07-05 23:01:36.754811 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:01:36.754822 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:01:36.754833 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:01:36.754843 | orchestrator |
2025-07-05 23:01:36.754854 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-07-05 23:01:36.754865 | orchestrator | Saturday 05 July 2025 23:00:23 +0000 (0:00:00.538) 0:01:20.769 *********
2025-07-05 23:01:36.754876 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.754888 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.754899 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.754910 | orchestrator |
2025-07-05 23:01:36.754921 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-07-05 23:01:36.754932 | orchestrator | Saturday 05 July 2025 23:00:24 +0000 (0:00:00.340) 0:01:21.110 *********
2025-07-05 23:01:36.754943 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.754954 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.754972 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.754983 | orchestrator |
2025-07-05 23:01:36.754994 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-07-05 23:01:36.755005 | orchestrator | Saturday 05 July 2025 23:00:24 +0000 (0:00:00.339) 0:01:21.450 *********
2025-07-05 23:01:36.755016 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.755027 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.755038 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.755049 | orchestrator |
2025-07-05 23:01:36.755060 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-07-05 23:01:36.755071 | orchestrator | Saturday 05 July 2025 23:00:24 +0000 (0:00:00.613) 0:01:22.063 *********
2025-07-05 23:01:36.755082 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.755093 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.755104 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.755115 | orchestrator |
2025-07-05 23:01:36.755126 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-07-05 23:01:36.755138 | orchestrator | Saturday 05 July 2025 23:00:25 +0000 (0:00:00.378) 0:01:22.442 *********
2025-07-05 23:01:36.755149 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.755160 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.755171 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.755182 | orchestrator |
2025-07-05 23:01:36.755193 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-07-05 23:01:36.755204 | orchestrator | Saturday 05 July 2025 23:00:25 +0000 (0:00:00.424) 0:01:22.867 *********
2025-07-05 23:01:36.755215 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:01:36.755226 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:01:36.755237 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:01:36.755248 | orchestrator |
2025-07-05 23:01:36.755259 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-07-05 23:01:36.755270 | orchestrator | Saturday 05 July 2025 23:00:26 +0000 (0:00:00.375) 0:01:23.242 *********
2025-07-05 23:01:36.755282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05
23:01:36.755296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755456 | orchestrator | 2025-07-05 23:01:36.755467 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-07-05 23:01:36.755479 | orchestrator | Saturday 05 July 2025 23:00:27 +0000 (0:00:01.757) 0:01:24.999 ********* 2025-07-05 23:01:36.755490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755634 | orchestrator | 2025-07-05 23:01:36.755645 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-07-05 23:01:36.755657 | orchestrator | Saturday 05 July 2025 23:00:32 +0000 (0:00:04.723) 0:01:29.722 ********* 2025-07-05 23:01:36.755668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755744 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.755791 | orchestrator | 2025-07-05 23:01:36.755802 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-05 23:01:36.755813 | orchestrator | Saturday 05 July 2025 23:00:34 +0000 (0:00:02.080) 0:01:31.803 ********* 
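[Annotation] The "Divide hosts by their OVN NB/SB leader/follower role" tasks earlier in this play classify each node from the Raft status that ovsdb-server reports via `ovs-appctl ... cluster/status`. A minimal sketch of that classification, assuming the standard `cluster/status` output format (the sample text and helper name below are illustrative, not taken from the role):

```python
# Classify a node as Raft leader or follower from the output of
# `ovs-appctl -t <db.ctl> cluster/status <DB>`. The "Role:" field name
# matches what ovsdb-server prints for clustered databases; the rest of
# the sample text is illustrative.

SAMPLE_STATUS = """\
1a2b
Name: OVN_Southbound
Cluster ID: f0a1 (f0a1...)
Server ID: 1a2b (1a2b...)
Address: tcp:192.168.16.10:6644
Status: cluster member
Role: leader
Term: 1
"""

def raft_role(status_text: str) -> str:
    """Return the Raft role ('leader' or 'follower') from cluster/status output."""
    for line in status_text.splitlines():
        if line.startswith("Role:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no 'Role:' line found in cluster/status output")

print(raft_role(SAMPLE_STATUS))  # leader
```

The role only proceeds with "new cluster" bootstrap args (as seen above for all three testbed nodes) when no existing leader is found; otherwise the "new member" branch would have run instead.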
2025-07-05 23:01:36.755824 | orchestrator | 2025-07-05 23:01:36.755835 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-05 23:01:36.755847 | orchestrator | Saturday 05 July 2025 23:00:34 +0000 (0:00:00.065) 0:01:31.868 ********* 2025-07-05 23:01:36.755858 | orchestrator | 2025-07-05 23:01:36.755869 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-05 23:01:36.755880 | orchestrator | Saturday 05 July 2025 23:00:34 +0000 (0:00:00.082) 0:01:31.950 ********* 2025-07-05 23:01:36.755891 | orchestrator | 2025-07-05 23:01:36.755902 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-07-05 23:01:36.755913 | orchestrator | Saturday 05 July 2025 23:00:34 +0000 (0:00:00.070) 0:01:32.021 ********* 2025-07-05 23:01:36.755924 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:01:36.755935 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:01:36.755946 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:01:36.755958 | orchestrator | 2025-07-05 23:01:36.755969 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-07-05 23:01:36.755980 | orchestrator | Saturday 05 July 2025 23:00:42 +0000 (0:00:07.473) 0:01:39.494 ********* 2025-07-05 23:01:36.755991 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:01:36.756002 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:01:36.756013 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:01:36.756024 | orchestrator | 2025-07-05 23:01:36.756035 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-07-05 23:01:36.756046 | orchestrator | Saturday 05 July 2025 23:00:49 +0000 (0:00:06.833) 0:01:46.328 ********* 2025-07-05 23:01:36.756057 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:01:36.756068 | orchestrator | changed: [testbed-node-1] 
2025-07-05 23:01:36.756079 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:01:36.756090 | orchestrator | 2025-07-05 23:01:36.756102 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-07-05 23:01:36.756113 | orchestrator | Saturday 05 July 2025 23:00:57 +0000 (0:00:07.985) 0:01:54.313 ********* 2025-07-05 23:01:36.756130 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:01:36.756141 | orchestrator | 2025-07-05 23:01:36.756152 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-07-05 23:01:36.756163 | orchestrator | Saturday 05 July 2025 23:00:57 +0000 (0:00:00.110) 0:01:54.424 ********* 2025-07-05 23:01:36.756174 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:01:36.756185 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:01:36.756196 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:01:36.756207 | orchestrator | 2025-07-05 23:01:36.756218 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-07-05 23:01:36.756230 | orchestrator | Saturday 05 July 2025 23:00:58 +0000 (0:00:00.767) 0:01:55.192 ********* 2025-07-05 23:01:36.756241 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:01:36.756252 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:01:36.756263 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:01:36.756274 | orchestrator | 2025-07-05 23:01:36.756285 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-07-05 23:01:36.756296 | orchestrator | Saturday 05 July 2025 23:00:58 +0000 (0:00:00.863) 0:01:56.055 ********* 2025-07-05 23:01:36.756312 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:01:36.756323 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:01:36.756334 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:01:36.756345 | orchestrator | 2025-07-05 23:01:36.756356 | orchestrator | TASK [ovn-db : Configure 
OVN SB connection settings] *************************** 2025-07-05 23:01:36.756368 | orchestrator | Saturday 05 July 2025 23:00:59 +0000 (0:00:00.800) 0:01:56.855 ********* 2025-07-05 23:01:36.756379 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:01:36.756389 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:01:36.756400 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:01:36.756411 | orchestrator | 2025-07-05 23:01:36.756422 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-07-05 23:01:36.756434 | orchestrator | Saturday 05 July 2025 23:01:00 +0000 (0:00:00.648) 0:01:57.504 ********* 2025-07-05 23:01:36.756445 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:01:36.756456 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:01:36.756472 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:01:36.756484 | orchestrator | 2025-07-05 23:01:36.756495 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-07-05 23:01:36.756506 | orchestrator | Saturday 05 July 2025 23:01:01 +0000 (0:00:00.733) 0:01:58.238 ********* 2025-07-05 23:01:36.756517 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:01:36.756528 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:01:36.756589 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:01:36.756601 | orchestrator | 2025-07-05 23:01:36.756612 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-07-05 23:01:36.756623 | orchestrator | Saturday 05 July 2025 23:01:02 +0000 (0:00:01.131) 0:01:59.369 ********* 2025-07-05 23:01:36.756634 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:01:36.756645 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:01:36.756656 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:01:36.756668 | orchestrator | 2025-07-05 23:01:36.756679 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 
2025-07-05 23:01:36.756690 | orchestrator | Saturday 05 July 2025 23:01:02 +0000 (0:00:00.318) 0:01:59.688 ********* 2025-07-05 23:01:36.756701 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756713 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756732 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756744 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756757 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756769 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756780 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756801 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756825 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756837 | orchestrator | 2025-07-05 23:01:36.756849 | 
orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-07-05 23:01:36.756860 | orchestrator | Saturday 05 July 2025 23:01:04 +0000 (0:00:01.530) 0:02:01.218 ********* 2025-07-05 23:01:36.756871 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756883 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756895 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756912 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756947 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.756987 | orchestrator | 2025-07-05 23:01:36.756998 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-07-05 23:01:36.757009 | orchestrator | Saturday 05 July 2025 23:01:08 +0000 (0:00:04.170) 0:02:05.389 ********* 2025-07-05 23:01:36.757027 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.757039 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.757050 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.757068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.757080 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.757091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.757102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.757112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.757122 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:01:36.757132 | orchestrator | 2025-07-05 23:01:36.757142 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-05 23:01:36.757152 | orchestrator | Saturday 05 July 2025 23:01:11 +0000 (0:00:03.255) 0:02:08.645 ********* 2025-07-05 23:01:36.757162 | orchestrator | 2025-07-05 23:01:36.757177 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-05 23:01:36.757187 | orchestrator | Saturday 05 July 2025 23:01:11 +0000 (0:00:00.068) 0:02:08.714 ********* 2025-07-05 23:01:36.757197 | orchestrator | 2025-07-05 23:01:36.757207 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-05 23:01:36.757217 | orchestrator | Saturday 05 July 2025 23:01:11 +0000 (0:00:00.065) 0:02:08.779 ********* 2025-07-05 23:01:36.757226 | orchestrator | 2025-07-05 23:01:36.757236 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-07-05 23:01:36.757246 | orchestrator | Saturday 05 July 2025 23:01:11 +0000 (0:00:00.064) 0:02:08.844 ********* 2025-07-05 23:01:36.757256 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:01:36.757266 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:01:36.757276 | orchestrator | 2025-07-05 23:01:36.757291 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-07-05 23:01:36.757307 | orchestrator | Saturday 05 July 2025 23:01:17 +0000 (0:00:06.191) 0:02:15.036 ********* 2025-07-05 23:01:36.757317 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:01:36.757327 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:01:36.757337 | orchestrator | 2025-07-05 23:01:36.757347 | orchestrator | RUNNING 
HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-07-05 23:01:36.757357 | orchestrator | Saturday 05 July 2025 23:01:24 +0000 (0:00:06.195) 0:02:21.232 ********* 2025-07-05 23:01:36.757367 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:01:36.757377 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:01:36.757387 | orchestrator | 2025-07-05 23:01:36.757397 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-07-05 23:01:36.757407 | orchestrator | Saturday 05 July 2025 23:01:30 +0000 (0:00:06.195) 0:02:27.427 ********* 2025-07-05 23:01:36.757417 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:01:36.757426 | orchestrator | 2025-07-05 23:01:36.757436 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-07-05 23:01:36.757446 | orchestrator | Saturday 05 July 2025 23:01:30 +0000 (0:00:00.128) 0:02:27.556 ********* 2025-07-05 23:01:36.757456 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:01:36.757466 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:01:36.757476 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:01:36.757486 | orchestrator | 2025-07-05 23:01:36.757495 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-07-05 23:01:36.757506 | orchestrator | Saturday 05 July 2025 23:01:31 +0000 (0:00:01.045) 0:02:28.601 ********* 2025-07-05 23:01:36.757515 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:01:36.757525 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:01:36.757551 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:01:36.757561 | orchestrator | 2025-07-05 23:01:36.757571 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-07-05 23:01:36.757580 | orchestrator | Saturday 05 July 2025 23:01:32 +0000 (0:00:00.630) 0:02:29.232 ********* 2025-07-05 23:01:36.757590 | orchestrator | ok: 
[testbed-node-0] 2025-07-05 23:01:36.757600 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:01:36.757610 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:01:36.757620 | orchestrator | 2025-07-05 23:01:36.757630 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-07-05 23:01:36.757640 | orchestrator | Saturday 05 July 2025 23:01:32 +0000 (0:00:00.805) 0:02:30.038 ********* 2025-07-05 23:01:36.757649 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:01:36.757659 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:01:36.757669 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:01:36.757679 | orchestrator | 2025-07-05 23:01:36.757688 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-07-05 23:01:36.757698 | orchestrator | Saturday 05 July 2025 23:01:33 +0000 (0:00:00.610) 0:02:30.648 ********* 2025-07-05 23:01:36.757708 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:01:36.757718 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:01:36.757728 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:01:36.757737 | orchestrator | 2025-07-05 23:01:36.757747 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-07-05 23:01:36.757757 | orchestrator | Saturday 05 July 2025 23:01:34 +0000 (0:00:00.866) 0:02:31.515 ********* 2025-07-05 23:01:36.757767 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:01:36.757777 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:01:36.757786 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:01:36.757796 | orchestrator | 2025-07-05 23:01:36.757806 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:01:36.757816 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-05 23:01:36.757827 | orchestrator | testbed-node-1 : ok=43  changed=19  
unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-07-05 23:01:36.757842 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-07-05 23:01:36.757852 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:01:36.757862 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:01:36.757872 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:01:36.757882 | orchestrator | 2025-07-05 23:01:36.757892 | orchestrator | 2025-07-05 23:01:36.757902 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:01:36.757912 | orchestrator | Saturday 05 July 2025 23:01:35 +0000 (0:00:00.999) 0:02:32.514 ********* 2025-07-05 23:01:36.757921 | orchestrator | =============================================================================== 2025-07-05 23:01:36.757935 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 36.37s 2025-07-05 23:01:36.757945 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.81s 2025-07-05 23:01:36.757955 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.18s 2025-07-05 23:01:36.757965 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.67s 2025-07-05 23:01:36.757975 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.03s 2025-07-05 23:01:36.757984 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.72s 2025-07-05 23:01:36.757994 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.17s 2025-07-05 23:01:36.758009 | orchestrator | ovn-db : Check ovn containers 
------------------------------------------- 3.26s 2025-07-05 23:01:36.758055 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.39s 2025-07-05 23:01:36.758065 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.08s 2025-07-05 23:01:36.758076 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.81s 2025-07-05 23:01:36.758085 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.79s 2025-07-05 23:01:36.758095 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.76s 2025-07-05 23:01:36.758105 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.67s 2025-07-05 23:01:36.758115 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.56s 2025-07-05 23:01:36.758125 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.53s 2025-07-05 23:01:36.758134 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.45s 2025-07-05 23:01:36.758144 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.13s 2025-07-05 23:01:36.758154 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.10s 2025-07-05 23:01:36.758164 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.08s 2025-07-05 23:01:36.758174 | orchestrator | 2025-07-05 23:01:36 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:01:39.798207 | orchestrator | 2025-07-05 23:01:39 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:01:39.798308 | orchestrator | 2025-07-05 23:01:39 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state STARTED 2025-07-05 23:01:39.798322 | orchestrator | 2025-07-05 23:01:39 | INFO  | 
Wait 1 second(s) until the next check [... repeated polling output trimmed: tasks 657a029a-b3d3-4dd1-b55e-0be749c54a9c and 5c164990-936b-48fb-a87c-53a6caa101cb remained in state STARTED, checked every ~3 seconds from 23:01:42 until 23:04:02 ...]
2025-07-05 23:04:02.963761 | orchestrator | 2025-07-05 23:04:02 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:04:06.023304 | orchestrator | 2025-07-05 23:04:06 | INFO  | Task 85c1671a-9d3e-4d0c-a62b-29d6c3bf5bdf is in state STARTED 2025-07-05 23:04:06.023977 | orchestrator | 2025-07-05 23:04:06 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED 2025-07-05 23:04:06.034096 | orchestrator | 2025-07-05 23:04:06.034155 | orchestrator | 2025-07-05 23:04:06.034169 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:04:06.034182 | orchestrator | 2025-07-05 23:04:06.034194 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 23:04:06.034205 | orchestrator | Saturday 05 July 2025 22:57:55 +0000 (0:00:00.429) 0:00:00.429 ********* 2025-07-05 23:04:06.034217 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:04:06.034255 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:04:06.034267 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:04:06.034278 | orchestrator | 2025-07-05 23:04:06.034290 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:04:06.034301 | orchestrator | Saturday 05 July 2025 22:57:55 +0000 (0:00:00.433) 0:00:00.864 ********* 2025-07-05 23:04:06.034313 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-07-05 23:04:06.034324 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-07-05 23:04:06.034335 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-07-05 23:04:06.034346 | orchestrator | 2025-07-05 23:04:06.034358 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-07-05 23:04:06.034368 | orchestrator | 2025-07-05 23:04:06.034475 | orchestrator | TASK [loadbalancer : include_tasks] 
******************************************** 2025-07-05 23:04:06.034584 | orchestrator | Saturday 05 July 2025 22:57:56 +0000 (0:00:00.832) 0:00:01.696 ********* 2025-07-05 23:04:06.034627 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.034642 | orchestrator | 2025-07-05 23:04:06.034947 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-07-05 23:04:06.034966 | orchestrator | Saturday 05 July 2025 22:57:57 +0000 (0:00:00.944) 0:00:02.641 ********* 2025-07-05 23:04:06.034980 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:04:06.034993 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:04:06.035012 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:04:06.035030 | orchestrator | 2025-07-05 23:04:06.035047 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-07-05 23:04:06.035064 | orchestrator | Saturday 05 July 2025 22:57:58 +0000 (0:00:00.777) 0:00:03.419 ********* 2025-07-05 23:04:06.035080 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.035097 | orchestrator | 2025-07-05 23:04:06.038198 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-07-05 23:04:06.038315 | orchestrator | Saturday 05 July 2025 22:57:59 +0000 (0:00:01.118) 0:00:04.538 ********* 2025-07-05 23:04:06.038332 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:04:06.038346 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:04:06.038357 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:04:06.038368 | orchestrator | 2025-07-05 23:04:06.038380 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-07-05 23:04:06.038392 | orchestrator | Saturday 05 July 2025 22:58:00 +0000 (0:00:01.615) 0:00:06.154 ********* 2025-07-05 23:04:06.038403 | orchestrator 
| changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-07-05 23:04:06.038415 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-07-05 23:04:06.038426 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-07-05 23:04:06.038437 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-07-05 23:04:06.038486 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-07-05 23:04:06.038498 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-07-05 23:04:06.038509 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-07-05 23:04:06.038522 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-07-05 23:04:06.038533 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-07-05 23:04:06.038544 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-07-05 23:04:06.038556 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-07-05 23:04:06.038567 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-07-05 23:04:06.038632 | orchestrator | 2025-07-05 23:04:06.038655 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-05 23:04:06.038677 | orchestrator | Saturday 05 July 2025 22:58:02 +0000 (0:00:02.127) 0:00:08.281 ********* 2025-07-05 23:04:06.038696 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-07-05 23:04:06.038708 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-07-05 
23:04:06.038719 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-07-05 23:04:06.038731 | orchestrator | 2025-07-05 23:04:06.038742 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-05 23:04:06.038753 | orchestrator | Saturday 05 July 2025 22:58:03 +0000 (0:00:00.775) 0:00:09.057 ********* 2025-07-05 23:04:06.038764 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-07-05 23:04:06.038775 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-07-05 23:04:06.038786 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-07-05 23:04:06.038797 | orchestrator | 2025-07-05 23:04:06.038808 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-05 23:04:06.038819 | orchestrator | Saturday 05 July 2025 22:58:04 +0000 (0:00:01.222) 0:00:10.279 ********* 2025-07-05 23:04:06.038830 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-07-05 23:04:06.038841 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.038875 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-07-05 23:04:06.038887 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.038898 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-07-05 23:04:06.038910 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.038921 | orchestrator | 2025-07-05 23:04:06.038932 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-07-05 23:04:06.038943 | orchestrator | Saturday 05 July 2025 22:58:06 +0000 (0:00:01.200) 0:00:11.480 ********* 2025-07-05 23:04:06.038958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.038978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.038990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.039003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.039025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.039047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.039091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-05 23:04:06.039104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-05 23:04:06.039115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-05 23:04:06.039127 | orchestrator | 2025-07-05 23:04:06.039139 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-07-05 23:04:06.039150 | orchestrator | Saturday 05 July 2025 22:58:07 +0000 (0:00:01.904) 0:00:13.384 ********* 2025-07-05 23:04:06.039162 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.039173 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.039184 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.039195 | orchestrator | 2025-07-05 23:04:06.039206 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config 
subdirectories exist] **** 2025-07-05 23:04:06.039224 | orchestrator | Saturday 05 July 2025 22:58:09 +0000 (0:00:01.125) 0:00:14.510 ********* 2025-07-05 23:04:06.039235 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-07-05 23:04:06.039246 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-07-05 23:04:06.039257 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-07-05 23:04:06.039269 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-07-05 23:04:06.039280 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-07-05 23:04:06.039291 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-07-05 23:04:06.039302 | orchestrator | 2025-07-05 23:04:06.039313 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-07-05 23:04:06.039329 | orchestrator | Saturday 05 July 2025 22:58:11 +0000 (0:00:01.925) 0:00:16.436 ********* 2025-07-05 23:04:06.039340 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.039351 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.039362 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.039373 | orchestrator | 2025-07-05 23:04:06.039384 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-07-05 23:04:06.039396 | orchestrator | Saturday 05 July 2025 22:58:14 +0000 (0:00:03.033) 0:00:19.470 ********* 2025-07-05 23:04:06.039407 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:04:06.039419 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:04:06.039429 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:04:06.039440 | orchestrator | 2025-07-05 23:04:06.039451 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-07-05 23:04:06.039463 | orchestrator | Saturday 05 July 2025 22:58:15 +0000 (0:00:01.877) 0:00:21.347 ********* 2025-07-05 23:04:06.039474 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.039511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.039524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.039537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 
'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__505cd69b6832f6db0c364ccdc5cf6359f96bf96d', '__omit_place_holder__505cd69b6832f6db0c364ccdc5cf6359f96bf96d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-05 23:04:06.039556 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.039569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.039585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.039632 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.039647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__505cd69b6832f6db0c364ccdc5cf6359f96bf96d', '__omit_place_holder__505cd69b6832f6db0c364ccdc5cf6359f96bf96d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-05 23:04:06.039659 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.039681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.039694 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.039713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.039725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__505cd69b6832f6db0c364ccdc5cf6359f96bf96d', '__omit_place_holder__505cd69b6832f6db0c364ccdc5cf6359f96bf96d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-05 23:04:06.039736 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.039748 | 
orchestrator | 2025-07-05 23:04:06.039759 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-07-05 23:04:06.039775 | orchestrator | Saturday 05 July 2025 22:58:16 +0000 (0:00:00.577) 0:00:21.924 ********* 2025-07-05 23:04:06.039787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.039799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.039819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.039830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.039849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.039861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__505cd69b6832f6db0c364ccdc5cf6359f96bf96d', 
'__omit_place_holder__505cd69b6832f6db0c364ccdc5cf6359f96bf96d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-05 23:04:06.039878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.039890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.039902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__505cd69b6832f6db0c364ccdc5cf6359f96bf96d', 
'__omit_place_holder__505cd69b6832f6db0c364ccdc5cf6359f96bf96d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-05 23:04:06.039920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.039939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.039950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__505cd69b6832f6db0c364ccdc5cf6359f96bf96d', 
'__omit_place_holder__505cd69b6832f6db0c364ccdc5cf6359f96bf96d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-05 23:04:06.039962 | orchestrator | 2025-07-05 23:04:06.039973 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-07-05 23:04:06.039985 | orchestrator | Saturday 05 July 2025 22:58:19 +0000 (0:00:02.868) 0:00:24.793 ********* 2025-07-05 23:04:06.040002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.040014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.040025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.040045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.040069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.040080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.040092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-05 23:04:06.040109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-05 23:04:06.040121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-05 23:04:06.040133 | orchestrator | 2025-07-05 23:04:06.040144 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-07-05 23:04:06.040156 | orchestrator | Saturday 05 July 2025 22:58:22 +0000 (0:00:03.563) 0:00:28.357 ********* 2025-07-05 23:04:06.040167 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-05 23:04:06.040179 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-05 23:04:06.040190 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-05 23:04:06.040202 | orchestrator | 2025-07-05 23:04:06.040213 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-07-05 23:04:06.040224 | orchestrator | Saturday 05 July 2025 22:58:25 +0000 (0:00:02.249) 0:00:30.607 ********* 2025-07-05 23:04:06.040235 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-05 23:04:06.040253 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-05 23:04:06.040264 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-05 23:04:06.040275 | orchestrator | 2025-07-05 23:04:06.040300 | orchestrator | 2025-07-05 23:04:06 | INFO  | Task 5c164990-936b-48fb-a87c-53a6caa101cb is in state SUCCESS 2025-07-05 23:04:06.040311 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-07-05 23:04:06.040323 | orchestrator | Saturday 05 July 2025 22:58:30
+0000 (0:00:05.480) 0:00:36.088 ********* 2025-07-05 23:04:06.040334 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.040345 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.040356 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.040367 | orchestrator | 2025-07-05 23:04:06.040378 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-07-05 23:04:06.040389 | orchestrator | Saturday 05 July 2025 22:58:31 +0000 (0:00:00.543) 0:00:36.631 ********* 2025-07-05 23:04:06.040401 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-05 23:04:06.040413 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-05 23:04:06.040424 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-05 23:04:06.040435 | orchestrator | 2025-07-05 23:04:06.040446 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-07-05 23:04:06.040457 | orchestrator | Saturday 05 July 2025 22:58:33 +0000 (0:00:02.316) 0:00:38.948 ********* 2025-07-05 23:04:06.040468 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-05 23:04:06.040479 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-05 23:04:06.040490 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-05 23:04:06.040501 | orchestrator | 2025-07-05 23:04:06.040512 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-07-05 23:04:06.040523 | orchestrator | Saturday 05 July 2025 
22:58:35 +0000 (0:00:01.978) 0:00:40.926 ********* 2025-07-05 23:04:06.040535 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-07-05 23:04:06.040546 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-07-05 23:04:06.040557 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-07-05 23:04:06.040568 | orchestrator | 2025-07-05 23:04:06.040579 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-07-05 23:04:06.040591 | orchestrator | Saturday 05 July 2025 22:58:37 +0000 (0:00:01.847) 0:00:42.774 ********* 2025-07-05 23:04:06.040624 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-07-05 23:04:06.040636 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-07-05 23:04:06.040648 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-07-05 23:04:06.040659 | orchestrator | 2025-07-05 23:04:06.040670 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-07-05 23:04:06.040681 | orchestrator | Saturday 05 July 2025 22:58:39 +0000 (0:00:01.967) 0:00:44.741 ********* 2025-07-05 23:04:06.040697 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.040708 | orchestrator | 2025-07-05 23:04:06.040719 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-07-05 23:04:06.040731 | orchestrator | Saturday 05 July 2025 22:58:40 +0000 (0:00:00.911) 0:00:45.653 ********* 2025-07-05 23:04:06.040742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.040761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.040781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.040794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.040806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.040818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.040834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-05 23:04:06.040852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-05 23:04:06.040864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-05 23:04:06.040876 | orchestrator | 2025-07-05 23:04:06.040887 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-07-05 23:04:06.040899 | orchestrator | Saturday 05 July 2025 22:58:43 +0000 (0:00:03.397) 0:00:49.051 ********* 2025-07-05 23:04:06.040941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.040954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.040966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.040978 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.040990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.041013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.041025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.041043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.041055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.041067 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.041079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.041090 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.041102 | orchestrator | 2025-07-05 23:04:06.041113 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-07-05 23:04:06.041125 | orchestrator | Saturday 05 July 2025 22:58:44 +0000 (0:00:00.636) 0:00:49.687 ********* 2025-07-05 23:04:06.041136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.041163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.041175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.041187 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.041199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.041219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.041231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.041243 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.041254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.041266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.041290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.041302 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.041313 | orchestrator | 2025-07-05 23:04:06.041324 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-05 23:04:06.041336 | orchestrator | Saturday 05 July 2025 22:58:45 +0000 (0:00:01.149) 0:00:50.837 ********* 2025-07-05 23:04:06.041347 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.041365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.041377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.041389 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.041401 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.041419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.041447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.041469 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.041486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.041505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.041534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.041554 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.041573 | orchestrator | 2025-07-05 23:04:06.041591 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS 
certificate] *** 2025-07-05 23:04:06.041653 | orchestrator | Saturday 05 July 2025 22:58:46 +0000 (0:00:01.024) 0:00:51.862 ********* 2025-07-05 23:04:06.041667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.041693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.041705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-07-05 23:04:06.041716 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.041734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.041747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.041758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.041778 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.041790 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.041802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.041819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.041831 | orchestrator | skipping: [testbed-node-2] 
2025-07-05 23:04:06.041843 | orchestrator | 2025-07-05 23:04:06.041854 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-07-05 23:04:06.041865 | orchestrator | Saturday 05 July 2025 22:58:47 +0000 (0:00:00.600) 0:00:52.462 ********* 2025-07-05 23:04:06.041882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.041894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.041906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.041918 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.041938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.041950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.041968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.041980 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.041991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.042007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.042074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.042086 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.042097 | orchestrator | 2025-07-05 23:04:06.042109 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-07-05 23:04:06.042120 | orchestrator | Saturday 05 July 2025 22:58:48 +0000 (0:00:01.157) 0:00:53.620 ********* 2025-07-05 23:04:06.042132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.042152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.042172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.042183 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.042195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.042212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.042224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.042236 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.042247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.042266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.042284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.042296 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.042308 | orchestrator | 2025-07-05 23:04:06.042319 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-07-05 23:04:06.042331 | orchestrator | Saturday 05 July 2025 22:58:48 +0000 (0:00:00.635) 0:00:54.256 ********* 2025-07-05 23:04:06.042342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.042354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.042366 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.042378 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.042389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.042401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.042434 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.042446 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.042458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.042470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.042501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.042513 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.042525 | orchestrator | 2025-07-05 23:04:06.042536 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-07-05 23:04:06.042547 | orchestrator | Saturday 05 July 2025 22:58:49 +0000 (0:00:00.785) 0:00:55.041 ********* 2025-07-05 23:04:06.042564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.042576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.042635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.042656 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.042675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.042705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.042736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.042758 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.042784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-05 23:04:06.042805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-05 23:04:06.042838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-05 23:04:06.042859 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.042877 | orchestrator | 2025-07-05 23:04:06.042896 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-07-05 23:04:06.042916 | orchestrator | Saturday 05 July 2025 22:58:52 +0000 (0:00:02.787) 0:00:57.829 ********* 2025-07-05 23:04:06.042935 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-05 23:04:06.042961 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-05 23:04:06.042973 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-05 23:04:06.042984 | orchestrator | 2025-07-05 23:04:06.042995 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-07-05 23:04:06.043007 | orchestrator | Saturday 05 July 2025 22:58:53 +0000 (0:00:01.503) 0:00:59.332 ********* 2025-07-05 23:04:06.043018 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-05 23:04:06.043029 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-05 23:04:06.043040 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-05 23:04:06.043051 | orchestrator | 2025-07-05 23:04:06.043063 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-07-05 23:04:06.043074 | orchestrator | Saturday 05 July 2025 22:58:55 +0000 (0:00:01.492) 0:01:00.824 ********* 2025-07-05 23:04:06.043085 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-05 23:04:06.043096 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-05 23:04:06.043108 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-05 23:04:06.043119 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-05 23:04:06.043130 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.043141 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-05 23:04:06.043152 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.043164 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-05 23:04:06.043175 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.043187 | orchestrator | 2025-07-05 23:04:06.043198 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-07-05 23:04:06.043209 | orchestrator | Saturday 05 July 2025 22:58:56 +0000 (0:00:01.050) 0:01:01.874 ********* 2025-07-05 23:04:06.043221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.043251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.043264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-05 23:04:06.043283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.043295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.043306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-05 23:04:06.043318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-05 23:04:06.043335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-05 23:04:06.043354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-05 23:04:06.043366 | orchestrator |
2025-07-05 23:04:06.043377 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-07-05 23:04:06.043388 | orchestrator | Saturday 05 July 2025 22:58:59 +0000 (0:00:02.647) 0:01:04.522 *********
2025-07-05 23:04:06.043399 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:04:06.043411 | orchestrator |
2025-07-05 23:04:06.043422 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-07-05 23:04:06.043433 | orchestrator | Saturday 05 July 2025 22:58:59 +0000 (0:00:00.702) 0:01:05.225 *********
2025-07-05 23:04:06.043451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-05 23:04:06.043466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-05 23:04:06.043479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.043491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.043514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-05 23:04:06.043527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-05 23:04:06.043538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.043558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-05 23:04:06.043570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.043582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-05 23:04:06.043632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.043664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.043684 | orchestrator |
2025-07-05 23:04:06.043699 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-07-05 23:04:06.043710 | orchestrator | Saturday 05 July 2025 22:59:03 +0000 (0:00:03.865) 0:01:09.091 *********
2025-07-05 23:04:06.043722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-05 23:04:06.043742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-05 23:04:06.043754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.043766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.043785 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.043802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-05 23:04:06.043814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-05 23:04:06.043826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.043843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.043855 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.043867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-05 23:04:06.043878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-05 23:04:06.043896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.043913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.043925 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.043937 | orchestrator |
2025-07-05 23:04:06.043948 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-07-05 23:04:06.043960 | orchestrator | Saturday 05 July 2025 22:59:04 +0000 (0:00:00.883) 0:01:09.974 *********
2025-07-05 23:04:06.043971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-07-05 23:04:06.043984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-07-05 23:04:06.043996 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.044008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-07-05 23:04:06.044019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-07-05 23:04:06.044031 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.044042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-07-05 23:04:06.044059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-07-05 23:04:06.044071 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.044083 | orchestrator |
2025-07-05 23:04:06.044094 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-07-05 23:04:06.044105 | orchestrator | Saturday 05 July 2025 22:59:05 +0000 (0:00:00.937) 0:01:10.912 *********
2025-07-05 23:04:06.044117 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.044128 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.044139 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.044150 | orchestrator |
2025-07-05 23:04:06.044167 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-07-05 23:04:06.044179 | orchestrator | Saturday 05 July 2025 22:59:07 +0000 (0:00:01.612) 0:01:12.524 *********
2025-07-05 23:04:06.044190 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.044201 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.044296 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.044309 | orchestrator |
2025-07-05 23:04:06.044321 | orchestrator | TASK [include_role : barbican] *************************************************
2025-07-05 23:04:06.044332 | orchestrator | Saturday 05 July 2025 22:59:09 +0000 (0:00:02.159) 0:01:14.683 *********
2025-07-05 23:04:06.044343 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:04:06.044354 | orchestrator |
2025-07-05 23:04:06.044365 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-07-05 23:04:06.044377 | orchestrator | Saturday 05 July 2025 22:59:09 +0000 (0:00:00.665) 0:01:15.349 *********
2025-07-05 23:04:06.044389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-05 23:04:06.044408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.044421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.044441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-05 23:04:06.044461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.044473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.044485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-05 23:04:06.044506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.044518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.044530 | orchestrator |
2025-07-05 23:04:06.044541 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-07-05 23:04:06.044552 | orchestrator | Saturday 05 July 2025 22:59:13 +0000 (0:00:03.918) 0:01:19.268 *********
2025-07-05 23:04:06.044571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-05 23:04:06.044591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.044663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.044677 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.044694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-05 23:04:06.044706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.044716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.044735 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.044752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-05 23:04:06.044764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.044774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.044784 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.044794 | orchestrator |
2025-07-05 23:04:06.044804 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-07-05 23:04:06.044815 | orchestrator | Saturday 05 July 2025 22:59:14 +0000 (0:00:00.864) 0:01:20.132 *********
2025-07-05 23:04:06.044825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-05 23:04:06.044852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-05 23:04:06.044864 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.044874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-05 23:04:06.044885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-05 23:04:06.044895 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.044921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-05 23:04:06.044940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-05 23:04:06.044957 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.044979 | orchestrator |
2025-07-05 23:04:06.045001 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-07-05 23:04:06.045017 | orchestrator | Saturday 05 July 2025 22:59:15 +0000 (0:00:00.828) 0:01:20.960 *********
2025-07-05 23:04:06.045034 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.045052 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.045069 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.045087 | orchestrator |
2025-07-05 23:04:06.045098 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-07-05 23:04:06.045108 | orchestrator | Saturday 05 July 2025 22:59:17 +0000 (0:00:01.534) 0:01:22.495 *********
2025-07-05 23:04:06.045125 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.045136 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.045146 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.045156 | orchestrator |
2025-07-05 23:04:06.045166 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-07-05 23:04:06.045176 | orchestrator | Saturday 05 July 2025 22:59:19 +0000 (0:00:02.258)
0:01:24.753 ********* 2025-07-05 23:04:06.045185 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.045195 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.045205 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.045215 | orchestrator | 2025-07-05 23:04:06.045225 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-07-05 23:04:06.045235 | orchestrator | Saturday 05 July 2025 22:59:19 +0000 (0:00:00.509) 0:01:25.263 ********* 2025-07-05 23:04:06.045245 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.045255 | orchestrator | 2025-07-05 23:04:06.045265 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-07-05 23:04:06.045275 | orchestrator | Saturday 05 July 2025 22:59:20 +0000 (0:00:00.653) 0:01:25.916 ********* 2025-07-05 23:04:06.045285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-05 23:04:06.045304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-05 23:04:06.045325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-05 23:04:06.045335 | orchestrator | 2025-07-05 23:04:06.045345 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-07-05 23:04:06.045356 | orchestrator | Saturday 05 July 2025 22:59:22 +0000 (0:00:02.483) 0:01:28.400 ********* 2025-07-05 23:04:06.045371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-05 23:04:06.045382 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.045393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-05 23:04:06.045406 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.045423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-05 23:04:06.045440 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.045465 | orchestrator | 2025-07-05 23:04:06.045481 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-07-05 23:04:06.045497 | orchestrator | Saturday 05 July 2025 22:59:24 +0000 (0:00:01.907) 0:01:30.308 ********* 2025-07-05 23:04:06.045524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-05 23:04:06.045544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-05 23:04:06.045562 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.045579 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-05 23:04:06.045596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-05 23:04:06.045694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-05 23:04:06.045713 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.045730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-05 23:04:06.045747 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.045763 | orchestrator | 2025-07-05 23:04:06.045779 | orchestrator | TASK [proxysql-config : 
Copying over ceph-rgw ProxySQL users config] *********** 2025-07-05 23:04:06.045796 | orchestrator | Saturday 05 July 2025 22:59:26 +0000 (0:00:01.661) 0:01:31.969 ********* 2025-07-05 23:04:06.045814 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.045830 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.045847 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.045864 | orchestrator | 2025-07-05 23:04:06.045880 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-07-05 23:04:06.045896 | orchestrator | Saturday 05 July 2025 22:59:26 +0000 (0:00:00.407) 0:01:32.377 ********* 2025-07-05 23:04:06.045918 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.045940 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.045956 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.045972 | orchestrator | 2025-07-05 23:04:06.045989 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-07-05 23:04:06.046067 | orchestrator | Saturday 05 July 2025 22:59:28 +0000 (0:00:01.236) 0:01:33.613 ********* 2025-07-05 23:04:06.046089 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.046105 | orchestrator | 2025-07-05 23:04:06.046120 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-07-05 23:04:06.046135 | orchestrator | Saturday 05 July 2025 22:59:29 +0000 (0:00:00.931) 0:01:34.545 ********* 2025-07-05 23:04:06.046159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.046177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}})  2025-07-05 23:04:06.046215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.046254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.046314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046393 | orchestrator | 2025-07-05 23:04:06.046409 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-07-05 23:04:06.046423 | orchestrator | Saturday 05 July 2025 22:59:33 +0000 (0:00:04.640) 0:01:39.185 ********* 2025-07-05 23:04:06.046438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.046461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046514 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.046535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.046551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046626 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.046636 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.046645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.046676 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.046684 | orchestrator | 2025-07-05 23:04:06.046692 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-07-05 23:04:06.046701 | orchestrator | Saturday 05 July 2025 22:59:35 +0000 (0:00:01.240) 0:01:40.426 ********* 2025-07-05 23:04:06.046715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-05 23:04:06.046724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-05 
23:04:06.046738 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.046747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-05 23:04:06.046755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-05 23:04:06.046763 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.046772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-05 23:04:06.046780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-05 23:04:06.046788 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.046796 | orchestrator | 2025-07-05 23:04:06.046804 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-07-05 23:04:06.046813 | orchestrator | Saturday 05 July 2025 22:59:35 +0000 (0:00:00.946) 0:01:41.372 ********* 2025-07-05 23:04:06.046821 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.046829 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.046837 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.046845 | orchestrator | 2025-07-05 23:04:06.046853 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-07-05 23:04:06.046861 | orchestrator | Saturday 05 July 2025 22:59:37 +0000 (0:00:01.285) 0:01:42.657 ********* 
2025-07-05 23:04:06.046869 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.046877 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.046885 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.046893 | orchestrator | 2025-07-05 23:04:06.046901 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-07-05 23:04:06.046910 | orchestrator | Saturday 05 July 2025 22:59:39 +0000 (0:00:01.871) 0:01:44.529 ********* 2025-07-05 23:04:06.046918 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.046926 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.046934 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.046942 | orchestrator | 2025-07-05 23:04:06.046950 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-07-05 23:04:06.046958 | orchestrator | Saturday 05 July 2025 22:59:39 +0000 (0:00:00.270) 0:01:44.799 ********* 2025-07-05 23:04:06.046966 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.046976 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.046990 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.047003 | orchestrator | 2025-07-05 23:04:06.047017 | orchestrator | TASK [include_role : designate] ************************************************ 2025-07-05 23:04:06.047037 | orchestrator | Saturday 05 July 2025 22:59:39 +0000 (0:00:00.495) 0:01:45.295 ********* 2025-07-05 23:04:06.047051 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.047064 | orchestrator | 2025-07-05 23:04:06.047079 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-07-05 23:04:06.047092 | orchestrator | Saturday 05 July 2025 22:59:40 +0000 (0:00:00.779) 0:01:46.074 ********* 2025-07-05 23:04:06.047107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:04:06.047148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-05 23:04:06.047159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-05 
23:04:06.047198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:04:06.047227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-05 23:04:06.047236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:04:06.047304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-05 23:04:06.047312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 
5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047363 | orchestrator | 2025-07-05 23:04:06.047372 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-07-05 23:04:06.047380 | orchestrator | Saturday 05 July 2025 22:59:44 +0000 (0:00:04.090) 0:01:50.165 ********* 2025-07-05 23:04:06.047395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:04:06.047404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-05 23:04:06.047413 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047474 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.047483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2025-07-05 23:04:06.047491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-05 23:04:06.047504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047527 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.047541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:04:06.047550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.047558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-05 23:04:06.047567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.047581 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.047593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.047625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.047635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.047648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.047658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.047666 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.047674 | orchestrator |
2025-07-05 23:04:06.047683 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-07-05 23:04:06.047691 | orchestrator | Saturday 05 July 2025 22:59:45 +0000 (0:00:00.903) 0:01:51.069 *********
2025-07-05 23:04:06.047700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-07-05 23:04:06.047708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-07-05 23:04:06.047722 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.047730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-07-05 23:04:06.047738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-07-05 23:04:06.047747 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.047755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-07-05 23:04:06.047767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-07-05 23:04:06.047775 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.047784 | orchestrator |
2025-07-05 23:04:06.047792 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-07-05 23:04:06.047800 | orchestrator | Saturday 05 July 2025 22:59:46 +0000 (0:00:01.067) 0:01:52.136 *********
2025-07-05 23:04:06.047808 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.047816 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.047824 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.047832 | orchestrator |
2025-07-05 23:04:06.047840 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-07-05 23:04:06.047848 | orchestrator | Saturday 05 July 2025 22:59:48 +0000 (0:00:01.860) 0:01:53.997 *********
2025-07-05 23:04:06.047856 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.047865 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.047873 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.047881 | orchestrator |
2025-07-05 23:04:06.047889 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-07-05 23:04:06.047897 | orchestrator | Saturday 05 July 2025 22:59:50 +0000 (0:00:01.953) 0:01:55.950 *********
2025-07-05 23:04:06.047905 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.047913 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.047921 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.047929 | orchestrator |
2025-07-05 23:04:06.047937 | orchestrator | TASK [include_role : glance] ***************************************************
2025-07-05 23:04:06.047946 | orchestrator | Saturday 05 July 2025 22:59:50 +0000 (0:00:00.297) 0:01:56.248 *********
2025-07-05 23:04:06.047954 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:04:06.047962 | orchestrator |
2025-07-05 23:04:06.047970 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-07-05 23:04:06.047978 | orchestrator | Saturday 05 July 2025 22:59:51 +0000 (0:00:00.806) 0:01:57.054 *********
2025-07-05 23:04:06.047994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-05 23:04:06.048015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-07-05 23:04:06.048031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-05 23:04:06.048050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-07-05 23:04:06.048066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-05 23:04:06.048080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-07-05 23:04:06.048094 | orchestrator |
2025-07-05 23:04:06.048112 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-07-05 23:04:06.048126 | orchestrator | Saturday 05 July 2025 22:59:55 +0000 (0:00:03.922) 0:02:00.977 *********
2025-07-05 23:04:06.048147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-05 23:04:06.048163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-07-05 23:04:06.048186 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.048208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-05 23:04:06.048231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-07-05 23:04:06.048249 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.048262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-05 23:04:06.048278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-07-05 23:04:06.048292 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.048301 | orchestrator |
2025-07-05 23:04:06.048309 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2025-07-05 23:04:06.048317 | orchestrator | Saturday 05 July 2025 22:59:58 +0000 (0:00:02.565) 0:02:03.542 *********
2025-07-05 23:04:06.048326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-07-05 23:04:06.048335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-07-05 23:04:06.048347 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.048356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-07-05 23:04:06.048365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-07-05 23:04:06.048373 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.048382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-07-05 23:04:06.048400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-07-05 23:04:06.048409 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.048417 | orchestrator |
2025-07-05 23:04:06.048425 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-07-05 23:04:06.048434 | orchestrator | Saturday 05 July 2025 23:00:01 +0000 (0:00:03.142) 0:02:06.685 *********
2025-07-05 23:04:06.048442 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.048450 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.048458 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.048466 | orchestrator |
2025-07-05 23:04:06.048474 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-07-05 23:04:06.048483 | orchestrator | Saturday 05 July 2025 23:00:03 +0000 (0:00:01.771) 0:02:08.456 *********
2025-07-05 23:04:06.048490 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.048499 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.048507 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.048515 | orchestrator |
2025-07-05 23:04:06.048523 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-07-05 23:04:06.048531 | orchestrator | Saturday 05 July 2025 23:00:05 +0000 (0:00:02.066) 0:02:10.523 *********
2025-07-05 23:04:06.048539 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.048547 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.048555 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.048563 | orchestrator |
2025-07-05 23:04:06.048571 | orchestrator | TASK [include_role : grafana] **************************************************
2025-07-05 23:04:06.048579 | orchestrator | Saturday 05 July 2025 23:00:05 +0000 (0:00:00.323) 0:02:10.847 *********
2025-07-05 23:04:06.048587 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:04:06.048595 | orchestrator |
2025-07-05 23:04:06.048657 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-07-05 23:04:06.048666 | orchestrator | Saturday 05 July 2025 23:00:06 +0000 (0:00:00.838) 0:02:11.685 *********
2025-07-05 23:04:06.048864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-05 23:04:06.048888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-05 23:04:06.048906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-05 23:04:06.048915 | orchestrator |
2025-07-05 23:04:06.048923 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2025-07-05 23:04:06.048932 | orchestrator | Saturday 05 July 2025 23:00:09 +0000 (0:00:03.528) 0:02:15.214 *********
2025-07-05 23:04:06.048940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-05 23:04:06.048949 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.048958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-05 23:04:06.048966 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.048975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port':
'3000'}}}})  2025-07-05 23:04:06.048983 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.048991 | orchestrator | 2025-07-05 23:04:06.049000 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-07-05 23:04:06.049008 | orchestrator | Saturday 05 July 2025 23:00:10 +0000 (0:00:00.426) 0:02:15.640 ********* 2025-07-05 23:04:06.049022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-05 23:04:06.049035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-05 23:04:06.049044 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.049052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-05 23:04:06.049065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-05 23:04:06.049073 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.049082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-05 23:04:06.049090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-05 23:04:06.049098 | orchestrator | skipping: [testbed-node-2] 
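The grafana items logged above show the generic haproxy-config role consuming a service map whose `haproxy` key carries one listener definition per frontend (internal and external). A minimal Python sketch of that mapping, assuming simplified fields; the real rendering is done by kolla-ansible's Jinja2 templates, and the VIP address used here is a hypothetical value, not taken from this log:

```python
# Sketch: turn a kolla-style 'haproxy' service map into HAProxy listen
# stanzas. Field names mirror the logged item; the rendering itself is
# a simplification, not kolla-ansible's actual template.

def render_listeners(haproxy: dict, vip: str) -> str:
    blocks = []
    for name, svc in haproxy.items():
        # 'enabled' appears both as a bool (True) and as the string
        # 'yes' in the logged dicts, so normalize before comparing.
        if str(svc.get("enabled")).lower() not in ("true", "yes"):
            continue
        lines = [
            f"listen {name}",
            f"    mode {svc.get('mode', 'http')}",
            f"    bind {vip}:{svc['port']}",
        ]
        blocks.append("\n".join(lines))
    return "\n\n".join(blocks)

# The two listeners from the grafana item above.
grafana_haproxy = {
    "grafana_server": {"enabled": "yes", "mode": "http",
                       "external": False, "port": "3000",
                       "listen_port": "3000"},
    "grafana_server_external": {"enabled": True, "mode": "http",
                                "external": True,
                                "external_fqdn": "api.testbed.osism.xyz",
                                "port": "3000", "listen_port": "3000"},
}

# "192.168.16.254" is an illustrative VIP, not from this deployment.
print(render_listeners(grafana_haproxy, "192.168.16.254"))
```

Both listeners are emitted because both are enabled; the external one would additionally be bound to the external VIP/FQDN in the real template.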
2025-07-05 23:04:06.049106 | orchestrator |
2025-07-05 23:04:06.049114 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-07-05 23:04:06.049123 | orchestrator | Saturday 05 July 2025 23:00:10 +0000 (0:00:00.757) 0:02:16.398 *********
2025-07-05 23:04:06.049131 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.049139 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.049147 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.049154 | orchestrator |
2025-07-05 23:04:06.049161 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-07-05 23:04:06.049168 | orchestrator | Saturday 05 July 2025 23:00:12 +0000 (0:00:01.606) 0:02:18.005 *********
2025-07-05 23:04:06.049175 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.049181 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.049188 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.049195 | orchestrator |
2025-07-05 23:04:06.049202 | orchestrator | TASK [include_role : heat] *****************************************************
2025-07-05 23:04:06.049213 | orchestrator | Saturday 05 July 2025 23:00:14 +0000 (0:00:01.999) 0:02:20.004 *********
2025-07-05 23:04:06.049224 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.049235 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.049246 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.049258 | orchestrator |
2025-07-05 23:04:06.049269 | orchestrator | TASK [include_role : horizon] **************************************************
2025-07-05 23:04:06.049281 | orchestrator | Saturday 05 July 2025 23:00:14 +0000 (0:00:00.310) 0:02:20.315 *********
2025-07-05 23:04:06.049293 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:04:06.049303 | orchestrator |
2025-07-05 23:04:06.049310 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-07-05 23:04:06.049317 | orchestrator | Saturday 05 July 2025 23:00:15 +0000 (0:00:00.998) 0:02:21.313 *********
2025-07-05 23:04:06.049336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-05 23:04:06.049351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-05 23:04:06.049368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-05 23:04:06.049381 | orchestrator |
2025-07-05 23:04:06.049388 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-07-05 23:04:06.049395 | orchestrator | Saturday 05 July 2025 23:00:19 +0000 (0:00:03.644) 0:02:24.957 *********
2025-07-05 23:04:06.049403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-05 23:04:06.049410 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.049426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-05 23:04:06.049439 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.049449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-05 23:04:06.049466 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.049474 | orchestrator |
2025-07-05 23:04:06.049482 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-07-05 23:04:06.049490 | orchestrator | Saturday 05 July 2025 23:00:20 +0000 (0:00:00.794) 0:02:25.752 *********
2025-07-05 23:04:06.049499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-05 23:04:06.049512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-05 23:04:06.049525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-05 23:04:06.049535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-05 23:04:06.049544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-07-05 23:04:06.049552 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.049561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-05 23:04:06.049569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-05 23:04:06.049577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-05 23:04:06.049586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-05 23:04:06.049594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-07-05 23:04:06.049620 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.049628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-05 23:04:06.049641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-05 23:04:06.049650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-05 23:04:06.049658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-05 23:04:06.049666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-07-05 23:04:06.049675 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.049683 | orchestrator |
2025-07-05 23:04:06.049695 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-07-05 23:04:06.049703 | orchestrator | Saturday 05 July 2025 23:00:21 +0000 (0:00:01.207) 0:02:26.959 *********
2025-07-05 23:04:06.049711 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.049719 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.049731 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.049739 | orchestrator |
2025-07-05 23:04:06.049747 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-07-05 23:04:06.049755 | orchestrator | Saturday 05 July 2025 23:00:23 +0000 (0:00:01.618) 0:02:28.577 *********
2025-07-05 23:04:06.049762 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.049771 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.049780 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.049787 | orchestrator |
2025-07-05 23:04:06.049795 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-07-05 23:04:06.049802 | orchestrator | Saturday 05 July 2025 23:00:25 +0000 (0:00:02.348) 0:02:30.926 *********
2025-07-05 23:04:06.049809 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.049816 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.049823 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.049830 | orchestrator |
2025-07-05 23:04:06.049837 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-07-05 23:04:06.049844 | orchestrator | Saturday 05 July 2025 23:00:25 +0000 (0:00:00.403) 0:02:31.330 *********
2025-07-05 23:04:06.049851 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.049858 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.049865 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.049872 | orchestrator |
2025-07-05 23:04:06.049879 | orchestrator | TASK [include_role : keystone] *************************************************
2025-07-05 23:04:06.049886 | orchestrator | Saturday 05 July 2025 23:00:26 +0000 (0:00:00.366) 0:02:31.696 *********
2025-07-05 23:04:06.049893 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:04:06.049900 | orchestrator |
2025-07-05 23:04:06.049907 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-07-05 23:04:06.049914 | orchestrator | Saturday 05 July 2025 23:00:27 +0000 (0:00:01.172) 0:02:32.868 *********
2025-07-05 23:04:06.049922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:04:06.049934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:04:06.049942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:04:06.049957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:04:06.049966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:04:06.049973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:04:06.049985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:04:06.049993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:04:06.050000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:04:06.050007 | orchestrator |
2025-07-05 23:04:06.050051 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-07-05 23:04:06.050061 | orchestrator | Saturday 05 July 2025 23:00:32 +0000 (0:00:04.637) 0:02:37.506 *********
2025-07-05 23:04:06.050069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-05 23:04:06.050101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-05 23:04:06.050115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-05 23:04:06.050123 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.050135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-05 23:04:06.050148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-05 23:04:06.050178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-05 23:04:06.050191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-05 23:04:06.050212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-05 23:04:06.050221 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.050228 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-05 23:04:06.050235 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.050242 | orchestrator | 2025-07-05 23:04:06.050249 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-07-05 23:04:06.050256 | orchestrator | Saturday 05 July 2025 23:00:32 +0000 (0:00:00.777) 0:02:38.284 ********* 2025-07-05 23:04:06.050264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-05 23:04:06.050272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-05 23:04:06.050280 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.050286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-05 23:04:06.050298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-05 23:04:06.050306 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.050317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-05 23:04:06.050329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-05 23:04:06.050341 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.050358 | orchestrator | 2025-07-05 23:04:06.050369 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-07-05 23:04:06.050381 | orchestrator | Saturday 05 July 2025 23:00:34 +0000 (0:00:01.185) 0:02:39.469 ********* 2025-07-05 23:04:06.050392 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.050404 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.050413 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.050420 | orchestrator | 2025-07-05 23:04:06.050427 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-07-05 23:04:06.050434 | orchestrator | Saturday 05 July 2025 23:00:35 +0000 (0:00:01.297) 0:02:40.767 ********* 2025-07-05 23:04:06.050441 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.050448 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.050455 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.050461 | orchestrator | 
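The keystone loop output above shows the pattern the haproxy-config role follows: each loop item is a service definition dict, items whose definition carries an enabled `haproxy` section produce `changed:` results, and items without one (keystone-ssh, keystone-fernet) are reported as `skipping:`. A minimal Python sketch of that selection logic, using dict shapes copied from the log items; the helper name and structure here are illustrative assumptions, not kolla-ansible's actual implementation:

```python
# Sketch (assumed, not kolla-ansible source): reduce kolla-style service
# definitions to the HAProxy listeners that the role would configure.

def haproxy_listeners(services):
    """Yield (listener_name, config) for enabled services that expose a
    'haproxy' section; services without one are skipped, mirroring the
    'skipping:' loop results in the log above."""
    for name, svc in services.items():
        if not svc.get("enabled") or "haproxy" not in svc:
            continue  # corresponds to the skipped loop items
        for listener, cfg in svc["haproxy"].items():
            if cfg.get("enabled"):
                yield listener, cfg

# Trimmed-down versions of the loop items printed by the task:
services = {
    "keystone": {
        "enabled": True,
        "haproxy": {
            "keystone_internal": {"enabled": True, "external": False,
                                  "port": "5000", "listen_port": "5000"},
            "keystone_external": {"enabled": True, "external": True,
                                  "external_fqdn": "api.testbed.osism.xyz",
                                  "port": "5000", "listen_port": "5000"},
        },
    },
    "keystone-ssh": {"enabled": True},     # no 'haproxy' key -> skipped
    "keystone-fernet": {"enabled": True},  # no 'haproxy' key -> skipped
}

listeners = dict(haproxy_listeners(services))
print(sorted(listeners))  # ['keystone_external', 'keystone_internal']
```

The same shape explains the magnum and manila sections that follow: only the `*-api` items carry a `haproxy` key, so only those are rendered into the HAProxy config, while conductor/scheduler/share/data items are skipped.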
2025-07-05 23:04:06.050468 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-07-05 23:04:06.050475 | orchestrator | Saturday 05 July 2025 23:00:37 +0000 (0:00:02.004) 0:02:42.771 ********* 2025-07-05 23:04:06.050482 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.050489 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.050495 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.050502 | orchestrator | 2025-07-05 23:04:06.050509 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-07-05 23:04:06.050516 | orchestrator | Saturday 05 July 2025 23:00:37 +0000 (0:00:00.313) 0:02:43.085 ********* 2025-07-05 23:04:06.050523 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.050530 | orchestrator | 2025-07-05 23:04:06.050536 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-07-05 23:04:06.050543 | orchestrator | Saturday 05 July 2025 23:00:38 +0000 (0:00:01.198) 0:02:44.284 ********* 2025-07-05 23:04:06.050551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:04:06.050558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.050575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:04:06.050588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.050595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:04:06.050624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.050634 | orchestrator | 2025-07-05 23:04:06.050642 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-07-05 23:04:06.050649 | orchestrator | Saturday 05 July 2025 23:00:42 +0000 (0:00:03.215) 0:02:47.499 ********* 2025-07-05 23:04:06.050656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-05 23:04:06.050679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.050687 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.050694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-05 23:04:06.050701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.050708 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.050716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-05 23:04:06.050723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.050734 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.050742 | orchestrator | 2025-07-05 
23:04:06.050748 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-07-05 23:04:06.050756 | orchestrator | Saturday 05 July 2025 23:00:42 +0000 (0:00:00.650) 0:02:48.149 ********* 2025-07-05 23:04:06.050766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-05 23:04:06.050777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-05 23:04:06.050784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-05 23:04:06.050792 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.050799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-05 23:04:06.050806 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.050813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-05 23:04:06.050820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-05 23:04:06.050844 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.050851 | orchestrator | 2025-07-05 23:04:06.050858 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] 
************* 2025-07-05 23:04:06.050865 | orchestrator | Saturday 05 July 2025 23:00:44 +0000 (0:00:01.576) 0:02:49.726 ********* 2025-07-05 23:04:06.050872 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.050879 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.050886 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.050893 | orchestrator | 2025-07-05 23:04:06.050899 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-07-05 23:04:06.050906 | orchestrator | Saturday 05 July 2025 23:00:45 +0000 (0:00:01.314) 0:02:51.040 ********* 2025-07-05 23:04:06.050913 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.050920 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.050927 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.050934 | orchestrator | 2025-07-05 23:04:06.050941 | orchestrator | TASK [include_role : manila] *************************************************** 2025-07-05 23:04:06.050948 | orchestrator | Saturday 05 July 2025 23:00:47 +0000 (0:00:02.112) 0:02:53.152 ********* 2025-07-05 23:04:06.050955 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.050962 | orchestrator | 2025-07-05 23:04:06.050968 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-07-05 23:04:06.050975 | orchestrator | Saturday 05 July 2025 23:00:48 +0000 (0:00:01.065) 0:02:54.217 ********* 2025-07-05 23:04:06.050983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-05 23:04:06.050995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-05 23:04:06.051033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-05 23:04:06.051074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 
5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051096 | orchestrator | 2025-07-05 23:04:06.051102 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-07-05 23:04:06.051110 | orchestrator | Saturday 05 July 2025 23:00:52 +0000 (0:00:04.061) 0:02:58.279 ********* 2025-07-05 23:04:06.051121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-05 23:04:06.051129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051158 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.051166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-05 23:04:06.051173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051198 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.051213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-05 23:04:06.051221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.051249 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.051257 | orchestrator | 2025-07-05 23:04:06.051264 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-07-05 23:04:06.051271 | orchestrator | Saturday 05 July 2025 23:00:53 +0000 (0:00:00.718) 0:02:58.997 ********* 2025-07-05 23:04:06.051278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-05 23:04:06.051285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-05 23:04:06.051292 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.051299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-05 23:04:06.051306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-05 23:04:06.051313 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.051320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-05 23:04:06.051327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-05 23:04:06.051334 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.051341 | orchestrator | 2025-07-05 23:04:06.051348 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-07-05 23:04:06.051355 | orchestrator | Saturday 05 July 2025 23:00:54 +0000 (0:00:00.870) 0:02:59.868 ********* 2025-07-05 23:04:06.051362 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.051368 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.051375 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.051382 | orchestrator | 2025-07-05 23:04:06.051389 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-07-05 23:04:06.051396 | orchestrator | Saturday 05 July 2025 23:00:56 +0000 (0:00:01.629) 0:03:01.497 ********* 2025-07-05 23:04:06.051407 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.051415 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.051427 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.051438 | orchestrator | 2025-07-05 
23:04:06.051449 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-07-05 23:04:06.051464 | orchestrator | Saturday 05 July 2025 23:00:58 +0000 (0:00:02.032) 0:03:03.530 ********* 2025-07-05 23:04:06.051476 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.051489 | orchestrator | 2025-07-05 23:04:06.051498 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-07-05 23:04:06.051505 | orchestrator | Saturday 05 July 2025 23:00:59 +0000 (0:00:01.116) 0:03:04.646 ********* 2025-07-05 23:04:06.051512 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-05 23:04:06.051519 | orchestrator | 2025-07-05 23:04:06.051526 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-07-05 23:04:06.051533 | orchestrator | Saturday 05 July 2025 23:01:02 +0000 (0:00:02.949) 0:03:07.595 ********* 2025-07-05 23:04:06.051557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-05 23:04:06.051566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-05 23:04:06.051574 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.051593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-05 23:04:06.051623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-05 23:04:06.051631 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.051639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2025-07-05 23:04:06.051646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-05 23:04:06.051654 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.051661 | orchestrator | 2025-07-05 23:04:06.051668 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-07-05 23:04:06.051679 | orchestrator | Saturday 05 July 2025 23:01:04 +0000 (0:00:02.718) 0:03:10.314 ********* 2025-07-05 23:04:06.051691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-05 23:04:06.051726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-05 23:04:06.051742 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.051770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-05 23:04:06.051791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-05 23:04:06.051801 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.051812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-05 23:04:06.051823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-05 23:04:06.051835 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.051846 | orchestrator |
2025-07-05 23:04:06.051858 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-07-05 23:04:06.051868 | orchestrator | Saturday 05 July 2025 23:01:07 +0000 (0:00:02.587) 0:03:12.901 *********
2025-07-05 23:04:06.051880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-05 23:04:06.051897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-05 23:04:06.051905 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.051912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-05 23:04:06.051919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-05 23:04:06.051927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-05 23:04:06.051934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-05 23:04:06.051941 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.051948 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.051955 | orchestrator |
2025-07-05 23:04:06.051962 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-07-05 23:04:06.051969 | orchestrator | Saturday 05 July 2025 23:01:10 +0000 (0:00:02.620) 0:03:15.522 *********
2025-07-05 23:04:06.051976 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.051983 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.051990 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.051997 | orchestrator |
2025-07-05 23:04:06.052004 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-07-05 23:04:06.052011 | orchestrator | Saturday 05 July 2025 23:01:12 +0000 (0:00:02.179) 0:03:17.702 *********
2025-07-05 23:04:06.052018 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.052028 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.052036 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.052043 | orchestrator |
2025-07-05 23:04:06.052049 | orchestrator | TASK [include_role : masakari] *************************************************
2025-07-05 23:04:06.052056 | orchestrator | Saturday 05 July 2025 23:01:13 +0000 (0:00:01.396) 0:03:19.098 *********
2025-07-05 23:04:06.052063 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.052070 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.052077 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.052083 | orchestrator |
2025-07-05 23:04:06.052090 | orchestrator | TASK [include_role : memcached] ************************************************
2025-07-05 23:04:06.052097 | orchestrator | Saturday 05 July 2025 23:01:14 +0000 (0:00:01.131) 0:03:19.481 *********
2025-07-05 23:04:06.052107 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:04:06.052115 | orchestrator |
2025-07-05 23:04:06.052121 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-07-05 23:04:06.052132 | orchestrator | Saturday 05 July 2025 23:01:15 +0000 (0:00:01.824) 0:03:20.612 *********
2025-07-05 23:04:06.052139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-05 23:04:06.052147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-05 23:04:06.052155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-05 23:04:06.052162 | orchestrator |
2025-07-05 23:04:06.052169 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-07-05 23:04:06.052176 | orchestrator | Saturday 05 July 2025 23:01:17 +0000 (0:00:01.824) 0:03:22.437 *********
2025-07-05 23:04:06.052183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-05 23:04:06.052198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-05 23:04:06.052206 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.052213 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.052223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-05 23:04:06.052231 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.052238 | orchestrator |
2025-07-05 23:04:06.052245 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-07-05 23:04:06.052252 | orchestrator | Saturday 05 July 2025 23:01:17 +0000 (0:00:00.400) 0:03:22.838 *********
2025-07-05 23:04:06.052259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-07-05 23:04:06.052267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-07-05 23:04:06.052274 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.052281 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.052288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-07-05 23:04:06.052295 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.052302 | orchestrator |
2025-07-05 23:04:06.052309 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-07-05 23:04:06.052315 | orchestrator | Saturday 05 July 2025 23:01:18 +0000 (0:00:00.576) 0:03:23.414 *********
2025-07-05 23:04:06.052322 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.052329 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.052336 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.052347 | orchestrator |
2025-07-05 23:04:06.052354 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-07-05 23:04:06.052361 | orchestrator | Saturday 05 July 2025 23:01:18 +0000 (0:00:00.756) 0:03:24.171 *********
2025-07-05 23:04:06.052367 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.052374 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.052381 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.052387 | orchestrator |
2025-07-05 23:04:06.052394 | orchestrator | TASK [include_role : mistral] **************************************************
2025-07-05 23:04:06.052401 | orchestrator | Saturday 05 July 2025 23:01:20 +0000 (0:00:01.261) 0:03:25.433 *********
2025-07-05 23:04:06.052408 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.052415 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.052422 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.052428 | orchestrator |
2025-07-05 23:04:06.052435 | orchestrator | TASK [include_role : neutron] **************************************************
2025-07-05 23:04:06.052442 | orchestrator | Saturday 05 July 2025 23:01:20 +0000 (0:00:00.298) 0:03:25.732 *********
2025-07-05 23:04:06.052449 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:04:06.052456 | orchestrator |
2025-07-05 23:04:06.052462 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-07-05 23:04:06.052469 | orchestrator | Saturday 05 July 2025 23:01:21 +0000 (0:00:01.389) 0:03:27.121 *********
2025-07-05 23:04:06.052688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-05 23:04:06.052715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.052724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.052732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.052747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-07-05 23:04:06.052755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.052812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-05 23:04:06.052835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-05 23:04:06.052848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.052860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-05 23:04:06.052887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.052899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-07-05 23:04:06.052909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-05 23:04:06.053004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.053021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-07-05 23:04:06.053034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-07-05 23:04:06.053053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.053064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-05 23:04:06.053144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.053167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-05 23:04:06.053175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.053190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.053197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.053204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.053274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-07-05 23:04:06.053285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.053298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.053306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-07-05 23:04:06.053313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-05 23:04:06.053321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.053403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-05 23:04:06.053414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-05 23:04:06.053422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.053435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-05 23:04:06.053442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:04:06.053449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.053500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:04:06.053514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.053522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.053534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-05 23:04:06.053542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-05 23:04:06.053550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-05 23:04:06.053557 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-05 23:04:06.053564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.053671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.053697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-05 23:04:06.053716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-05 23:04:06.053729 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:04:06.053741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:04:06.053842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 
'timeout': '30'}}})  2025-07-05 23:04:06.053877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.053888 | orchestrator | 2025-07-05 23:04:06.053900 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-07-05 23:04:06.053911 | orchestrator | Saturday 05 July 2025 23:01:25 +0000 (0:00:04.262) 0:03:31.384 ********* 2025-07-05 23:04:06.053922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-07-05 23:04:06.053934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.053946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-05 23:04:06.054077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:04:06.054092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-05 23:04:06.054100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-05 23:04:06.054173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:04:06.054194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-05 23:04:06.054268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-05 23:04:06.054275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-05 23:04:06.054288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-05 23:04:06.054295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:04:06.054343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-05 23:04:06.054368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-05 23:04:06.054382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:04:06.054465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:04:06.054472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 
'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054492 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.054499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-05 23:04:06.054557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-05 23:04:06.054566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-05 23:04:06.054580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-05 23:04:06.054587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': 
False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-05 23:04:06.054618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-05 23:04:06.054700 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:04:06.054719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:04:06.054726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054747 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.054754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-05 23:04:06.054793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-05 23:04:06.054809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-05 23:04:06.054830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:04:06.054840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.054857 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.054868 | orchestrator 
| 2025-07-05 23:04:06.054879 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-07-05 23:04:06.054888 | orchestrator | Saturday 05 July 2025 23:01:27 +0000 (0:00:01.482) 0:03:32.866 ********* 2025-07-05 23:04:06.054895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-05 23:04:06.054902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-05 23:04:06.054909 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.054940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-05 23:04:06.054952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-05 23:04:06.054958 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.054965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-05 23:04:06.054971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-05 23:04:06.054978 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.054984 | orchestrator | 2025-07-05 23:04:06.054991 | orchestrator | TASK [proxysql-config : Copying over neutron 
ProxySQL users config] ************ 2025-07-05 23:04:06.054997 | orchestrator | Saturday 05 July 2025 23:01:29 +0000 (0:00:02.021) 0:03:34.887 ********* 2025-07-05 23:04:06.055003 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.055010 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.055017 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.055023 | orchestrator | 2025-07-05 23:04:06.055029 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-07-05 23:04:06.055036 | orchestrator | Saturday 05 July 2025 23:01:30 +0000 (0:00:01.310) 0:03:36.198 ********* 2025-07-05 23:04:06.055042 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.055048 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.055055 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.055061 | orchestrator | 2025-07-05 23:04:06.055067 | orchestrator | TASK [include_role : placement] ************************************************ 2025-07-05 23:04:06.055074 | orchestrator | Saturday 05 July 2025 23:01:32 +0000 (0:00:01.914) 0:03:38.113 ********* 2025-07-05 23:04:06.055080 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.055087 | orchestrator | 2025-07-05 23:04:06.055093 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-07-05 23:04:06.055099 | orchestrator | Saturday 05 July 2025 23:01:33 +0000 (0:00:01.128) 0:03:39.241 ********* 2025-07-05 23:04:06.055106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.055119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.055148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.055156 | orchestrator | 2025-07-05 23:04:06.055162 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-07-05 23:04:06.055169 | orchestrator | Saturday 05 July 2025 23:01:37 +0000 (0:00:03.563) 0:03:42.805 ********* 2025-07-05 23:04:06.055175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.055182 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.055189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.055199 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.055206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.055213 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.055220 | orchestrator | 2025-07-05 23:04:06.055226 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-07-05 23:04:06.055232 | orchestrator | Saturday 05 July 2025 23:01:37 +0000 
(0:00:00.514) 0:03:43.319 *********
2025-07-05 23:04:06.055239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055253 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.055277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055294 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.055301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055314 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.055322 | orchestrator |
2025-07-05 23:04:06.055329 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-07-05 23:04:06.055337 | orchestrator | Saturday 05 July 2025 23:01:38 +0000 (0:00:00.756) 0:03:44.075 *********
2025-07-05 23:04:06.055344 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.055356 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.055363 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.055370 | orchestrator |
2025-07-05 23:04:06.055378 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-07-05 23:04:06.055386 | orchestrator | Saturday 05 July 2025 23:01:40 +0000 (0:00:01.680) 0:03:45.756 *********
2025-07-05 23:04:06.055394 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.055401 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.055408 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.055416 | orchestrator |
2025-07-05 23:04:06.055424 | orchestrator | TASK [include_role : nova] *****************************************************
2025-07-05 23:04:06.055431 | orchestrator | Saturday 05 July 2025 23:01:42 +0000 (0:00:02.128) 0:03:47.885 *********
2025-07-05 23:04:06.055439 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:04:06.055446 | orchestrator |
2025-07-05 23:04:06.055454 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-07-05 23:04:06.055462 | orchestrator | Saturday 05 July 2025 23:01:43 +0000 (0:00:01.276) 0:03:49.161 *********
2025-07-05 23:04:06.055471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:04:06.055480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.055506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.055519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:04:06.055531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.055540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.055548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:04:06.055578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.055587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.055619 | orchestrator |
2025-07-05 23:04:06.055628 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-07-05 23:04:06.055636 | orchestrator | Saturday 05 July 2025 23:01:48 +0000 (0:00:04.272) 0:03:53.433 *********
2025-07-05 23:04:06.055644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:04:06.055653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.055661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.055669 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.055700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:04:06.055712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.055719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.055726 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.055733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:04:06.055740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.055771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-05 23:04:06.055783 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.055790 | orchestrator |
2025-07-05 23:04:06.055796 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-07-05 23:04:06.055803 | orchestrator | Saturday 05 July 2025 23:01:48 +0000 (0:00:00.822) 0:03:54.255 *********
2025-07-05 23:04:06.055809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055837 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.055843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055869 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.055875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-05 23:04:06.055907 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.055917 | orchestrator |
2025-07-05 23:04:06.055928 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-07-05 23:04:06.055938 | orchestrator | Saturday 05 July 2025 23:01:49 +0000 (0:00:00.822) 0:03:55.078 *********
2025-07-05 23:04:06.055948 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.055964 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.055974 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.055985 | orchestrator |
2025-07-05 23:04:06.055996 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-07-05 23:04:06.056007 | orchestrator | Saturday 05 July 2025 23:01:51 +0000 (0:00:01.694) 0:03:56.772 *********
2025-07-05 23:04:06.056017 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.056024 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.056030 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.056036 | orchestrator |
2025-07-05 23:04:06.056043 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-07-05 23:04:06.056049 | orchestrator | Saturday 05 July 2025 23:01:53 +0000 (0:00:02.161) 0:03:58.934 *********
2025-07-05 23:04:06.056055 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:04:06.056062 | orchestrator |
2025-07-05 23:04:06.056068 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-07-05 23:04:06.056097 | orchestrator | Saturday 05 July 2025 23:01:55 +0000 (0:00:01.563) 0:04:00.497 *********
2025-07-05 23:04:06.056109 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-07-05 23:04:06.056115 | orchestrator |
2025-07-05 23:04:06.056122 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-07-05 23:04:06.056128 | orchestrator | Saturday 05 July 2025 23:01:56 +0000 (0:00:01.118) 0:04:01.616 *********
2025-07-05 23:04:06.056135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-05 23:04:06.056142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-05 23:04:06.056149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-05 23:04:06.056156 | orchestrator |
2025-07-05 23:04:06.056162 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-07-05 23:04:06.056169 | orchestrator | Saturday 05 July 2025 23:02:00 +0000 (0:00:03.856) 0:04:05.473 *********
2025-07-05 23:04:06.056175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-05 23:04:06.056182 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.056193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-05 23:04:06.056200 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.056207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-05 23:04:06.056213 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.056219 | orchestrator |
2025-07-05 23:04:06.056226 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-07-05 23:04:06.056232 | orchestrator | Saturday 05 July 2025 23:02:01 +0000 (0:00:01.425) 0:04:06.898 *********
2025-07-05 23:04:06.056256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-05 23:04:06.056268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-05 23:04:06.056275 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.056282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-05 23:04:06.056289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-05 23:04:06.056295 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.056302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-05 23:04:06.056308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-05 23:04:06.056315 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.056321 | orchestrator |
2025-07-05 23:04:06.056327 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-07-05 23:04:06.056334 | orchestrator | Saturday 05 July 2025 23:02:03 +0000 (0:00:01.904) 0:04:08.802 *********
2025-07-05 23:04:06.056340 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.056347 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.056353 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.056359 | orchestrator |
2025-07-05 23:04:06.056366 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-07-05 23:04:06.056372 | orchestrator | Saturday 05 July 2025 23:02:06 +0000 (0:00:02.668) 0:04:11.471 *********
2025-07-05 23:04:06.056379 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.056390 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.056397 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.056403 | orchestrator |
2025-07-05 23:04:06.056409 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-07-05 23:04:06.056416 | orchestrator | Saturday 05 July 2025 23:02:09 +0000 (0:00:03.216) 0:04:14.687 *********
2025-07-05 23:04:06.056423 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-07-05 23:04:06.056429 | orchestrator |
2025-07-05 23:04:06.056436 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-07-05 23:04:06.056442 | orchestrator | Saturday 05 July 2025 23:02:10 +0000 (0:00:00.883) 0:04:15.571 *********
2025-07-05 23:04:06.056449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-05 23:04:06.056455 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.056462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-05 23:04:06.056468 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.056492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-05 23:04:06.056500 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.056506 | orchestrator |
2025-07-05 23:04:06.056512 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-07-05 23:04:06.056522 | orchestrator | Saturday 05 July 2025 23:02:11 +0000 (0:00:01.419) 0:04:16.991 *********
2025-07-05 23:04:06.056529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-05 23:04:06.056536 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.056542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-05 23:04:06.056553 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.056560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-05 23:04:06.056566 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.056573 | orchestrator |
2025-07-05 23:04:06.056579 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-07-05 23:04:06.056585 | orchestrator | Saturday 05 July 2025 23:02:13 +0000 (0:00:01.626) 0:04:18.617 *********
2025-07-05 23:04:06.056592 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.056614 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.056622 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.056628 | orchestrator |
2025-07-05 23:04:06.056634 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-07-05 23:04:06.056641 | orchestrator | Saturday 05 July 2025 23:02:14 +0000 (0:00:01.280) 0:04:19.897 *********
2025-07-05 23:04:06.056647 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:04:06.056654 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:04:06.056660 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:04:06.056666 | orchestrator |
2025-07-05 23:04:06.056672 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-07-05 23:04:06.056679 | orchestrator | Saturday 05 July 2025 23:02:16 +0000 (0:00:02.335) 0:04:22.232 *********
2025-07-05 23:04:06.056685 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:04:06.056692 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:04:06.056698 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:04:06.056704 | orchestrator |
2025-07-05 23:04:06.056711 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-07-05 23:04:06.056717 | orchestrator | Saturday 05 July 2025 23:02:19 +0000 (0:00:02.705) 0:04:24.938 *********
2025-07-05 23:04:06.056723 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-07-05 23:04:06.056730 | orchestrator |
2025-07-05 23:04:06.056736 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-07-05 23:04:06.056743 | orchestrator | Saturday 05 July 2025 23:02:20 +0000 (0:00:01.074) 0:04:26.012 *********
2025-07-05 23:04:06.056749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-05 23:04:06.056756 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.056785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-05 23:04:06.056793 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.056804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-05 23:04:06.056811 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.056817 | orchestrator |
2025-07-05 23:04:06.056823 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-07-05 23:04:06.056830 | orchestrator | Saturday 05 July 2025 23:02:21 +0000 (0:00:01.043) 0:04:27.056 *********
2025-07-05 23:04:06.056836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-05 23:04:06.056843 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.056850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy',
'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-05 23:04:06.056856 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.056863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-05 23:04:06.056869 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.056876 | orchestrator | 2025-07-05 23:04:06.056882 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-07-05 23:04:06.056889 | orchestrator | Saturday 05 July 2025 23:02:22 +0000 (0:00:01.286) 0:04:28.342 ********* 2025-07-05 23:04:06.056895 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.056901 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.056908 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.056914 | orchestrator | 2025-07-05 23:04:06.056921 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-05 23:04:06.056927 | orchestrator | Saturday 05 July 2025 23:02:24 +0000 (0:00:01.721) 0:04:30.064 ********* 2025-07-05 23:04:06.056933 | orchestrator | ok: [testbed-node-0] 2025-07-05 
23:04:06.056940 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:04:06.056946 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:04:06.056953 | orchestrator | 2025-07-05 23:04:06.056959 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-05 23:04:06.056965 | orchestrator | Saturday 05 July 2025 23:02:26 +0000 (0:00:02.293) 0:04:32.358 ********* 2025-07-05 23:04:06.056972 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:04:06.056978 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:04:06.056985 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:04:06.056995 | orchestrator | 2025-07-05 23:04:06.057002 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-07-05 23:04:06.057012 | orchestrator | Saturday 05 July 2025 23:02:29 +0000 (0:00:02.939) 0:04:35.297 ********* 2025-07-05 23:04:06.057022 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.057033 | orchestrator | 2025-07-05 23:04:06.057044 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-07-05 23:04:06.057054 | orchestrator | Saturday 05 July 2025 23:02:31 +0000 (0:00:01.356) 0:04:36.654 ********* 2025-07-05 23:04:06.057095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.057109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:04:06.057120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:04:06.057132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:04:06.057144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.057170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.057215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:04:06.057224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:04:06.057231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:04:06.057238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.057245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.057260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:04:06.057289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:04:06.057297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:04:06.057304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.057310 | orchestrator | 2025-07-05 23:04:06.057317 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-07-05 23:04:06.057324 | orchestrator | Saturday 05 July 2025 23:02:34 +0000 (0:00:03.541) 0:04:40.196 ********* 2025-07-05 23:04:06.057330 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.057337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:04:06.057348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:04:06.057376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:04:06.057384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.057391 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.057398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.057404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:04:06.057411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:04:06.057422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:04:06.057429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.057452 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.057464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.057470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:04:06.057477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:04:06.057484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:04:06.057495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:04:06.057502 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.057508 | orchestrator | 2025-07-05 23:04:06.057515 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-07-05 23:04:06.057522 | orchestrator | Saturday 05 July 2025 23:02:35 +0000 (0:00:00.735) 0:04:40.931 ********* 2025-07-05 23:04:06.057528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-05 23:04:06.057535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-05 23:04:06.057541 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.057566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-05 23:04:06.057576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-05 23:04:06.057583 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.057589 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-05 23:04:06.057596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-05 23:04:06.057654 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.057661 | orchestrator | 2025-07-05 23:04:06.057668 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-07-05 23:04:06.057674 | orchestrator | Saturday 05 July 2025 23:02:36 +0000 (0:00:00.914) 0:04:41.846 ********* 2025-07-05 23:04:06.057681 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.057687 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.057693 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.057700 | orchestrator | 2025-07-05 23:04:06.057706 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-07-05 23:04:06.057713 | orchestrator | Saturday 05 July 2025 23:02:38 +0000 (0:00:01.796) 0:04:43.642 ********* 2025-07-05 23:04:06.057720 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:04:06.057726 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:04:06.057732 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:04:06.057739 | orchestrator | 2025-07-05 23:04:06.057746 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-07-05 23:04:06.057752 | orchestrator | Saturday 05 July 2025 23:02:40 +0000 (0:00:02.080) 0:04:45.722 ********* 2025-07-05 23:04:06.057764 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.057771 | orchestrator | 2025-07-05 23:04:06.057777 | orchestrator | TASK 
[haproxy-config : Copying over opensearch haproxy config] ***************** 2025-07-05 23:04:06.057784 | orchestrator | Saturday 05 July 2025 23:02:41 +0000 (0:00:01.348) 0:04:47.071 ********* 2025-07-05 23:04:06.057790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:04:06.057798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:04:06.057825 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:04:06.057838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:04:06.057846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:04:06.057859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:04:06.057866 | orchestrator | 2025-07-05 23:04:06.057872 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-07-05 23:04:06.057879 | orchestrator | Saturday 05 July 2025 23:02:46 +0000 (0:00:04.696) 0:04:51.768 ********* 2025-07-05 23:04:06.057902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-05 23:04:06.057911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-05 23:04:06.057921 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.057927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-05 23:04:06.057933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-05 23:04:06.057939 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.058034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-05 23:04:06.058060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-05 23:04:06.058071 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.058077 | orchestrator | 2025-07-05 23:04:06.058083 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-07-05 23:04:06.058089 | orchestrator | Saturday 05 July 2025 23:02:47 +0000 (0:00:00.887) 0:04:52.655 ********* 2025-07-05 23:04:06.058095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-05 23:04:06.058101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-05 23:04:06.058107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-05 23:04:06.058113 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.058119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-05 23:04:06.058125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-05 23:04:06.058130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-05 23:04:06.058136 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.058142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-05 23:04:06.058147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-05 23:04:06.058153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-05 23:04:06.058159 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.058164 | orchestrator | 2025-07-05 23:04:06.058170 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-07-05 23:04:06.058176 | orchestrator | Saturday 05 July 2025 23:02:48 +0000 (0:00:00.884) 0:04:53.540 ********* 2025-07-05 23:04:06.058181 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.058187 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.058193 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.058203 | orchestrator | 2025-07-05 23:04:06.058213 | orchestrator | TASK [proxysql-config : Copying over opensearch 
ProxySQL rules config] ********* 2025-07-05 23:04:06.058221 | orchestrator | Saturday 05 July 2025 23:02:48 +0000 (0:00:00.428) 0:04:53.968 ********* 2025-07-05 23:04:06.058231 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.058240 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.058250 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.058260 | orchestrator | 2025-07-05 23:04:06.058298 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-07-05 23:04:06.058305 | orchestrator | Saturday 05 July 2025 23:02:49 +0000 (0:00:01.364) 0:04:55.333 ********* 2025-07-05 23:04:06.058320 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.058326 | orchestrator | 2025-07-05 23:04:06.058332 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-07-05 23:04:06.058337 | orchestrator | Saturday 05 July 2025 23:02:51 +0000 (0:00:01.729) 0:04:57.062 ********* 2025-07-05 23:04:06.058344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-05 23:04:06.058350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-05 23:04:06.058356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-05 23:04:06.058375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-05 23:04:06.058405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-05 23:04:06.058413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058419 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-05 23:04:06.058431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-05 23:04:06.058437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-05 23:04:06.058445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-05 23:04:06.058512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-05 23:04:06.058522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': 
['timeout server 45s']}}}})  2025-07-05 23:04:06.058533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-05 23:04:06.058576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-05 23:04:06.058584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-05 23:04:06.058590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-05 23:04:06.058637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-05 23:04:06.058652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-05 23:04:06.058658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-05 23:04:06.058675 | orchestrator | 2025-07-05 23:04:06.058681 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-07-05 23:04:06.058687 | orchestrator | Saturday 05 July 2025 23:02:55 +0000 (0:00:04.139) 0:05:01.202 ********* 2025-07-05 23:04:06.058693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-05 23:04:06.058702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-05 23:04:06.058717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-05 23:04:06.058736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-05 23:04:06.058742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-05 23:04:06.058752 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-05 23:04:06.058788 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.058794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-05 23:04:06.058800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-05 23:04:06.058806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-05 23:04:06.058834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-05 23:04:06.058840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-05 23:04:06.058846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-05 23:04:06.058867 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.058873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-05 23:04:06.058882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-05 23:04:06.058891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-05 23:04:06.058909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-05 23:04:06.058918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-05 23:04:06.058924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:04:06.058943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-05 23:04:06.058949 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.058955 | orchestrator | 2025-07-05 23:04:06.058960 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-07-05 23:04:06.058966 | orchestrator | Saturday 05 July 2025 23:02:57 +0000 (0:00:01.211) 0:05:02.413 ********* 2025-07-05 23:04:06.058972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-05 23:04:06.058978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-05 23:04:06.058987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}})  2025-07-05 23:04:06.058997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-05 23:04:06.059012 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.059021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-05 23:04:06.059030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-05 23:04:06.059039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-05 23:04:06.059050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-05 23:04:06.059058 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.059064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-05 23:04:06.059069 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-05 23:04:06.059075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-05 23:04:06.059084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-05 23:04:06.059090 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.059100 | orchestrator | 2025-07-05 23:04:06.059113 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-07-05 23:04:06.059123 | orchestrator | Saturday 05 July 2025 23:02:57 +0000 (0:00:00.984) 0:05:03.398 ********* 2025-07-05 23:04:06.059132 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.059141 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.059150 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.059160 | orchestrator | 2025-07-05 23:04:06.059166 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-07-05 23:04:06.059171 | orchestrator | Saturday 05 July 2025 23:02:58 +0000 (0:00:00.425) 0:05:03.823 ********* 2025-07-05 23:04:06.059177 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.059182 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.059188 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.059196 | orchestrator 
| 2025-07-05 23:04:06.059206 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-07-05 23:04:06.059215 | orchestrator | Saturday 05 July 2025 23:03:00 +0000 (0:00:01.658) 0:05:05.481 ********* 2025-07-05 23:04:06.059224 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.059233 | orchestrator | 2025-07-05 23:04:06.059243 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-07-05 23:04:06.059252 | orchestrator | Saturday 05 July 2025 23:03:01 +0000 (0:00:01.730) 0:05:07.212 ********* 2025-07-05 23:04:06.059268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-05 23:04:06.059275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-05 23:04:06.059281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-05 23:04:06.059289 | orchestrator | 2025-07-05 23:04:06.059303 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-07-05 23:04:06.059312 | orchestrator | Saturday 05 July 2025 23:03:04 +0000 (0:00:02.419) 0:05:09.632 ********* 2025-07-05 23:04:06.059328 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-05 23:04:06.059352 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.059364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 
'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-05 23:04:06.059375 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.059385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-05 23:04:06.059395 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.059406 | orchestrator | 2025-07-05 23:04:06.059411 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-07-05 23:04:06.059420 | orchestrator | Saturday 05 July 2025 23:03:04 +0000 (0:00:00.379) 0:05:10.011 ********* 2025-07-05 23:04:06.059429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-05 23:04:06.059438 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.059447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-05 23:04:06.059456 | 
orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.059465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-05 23:04:06.059475 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.059484 | orchestrator | 2025-07-05 23:04:06.059493 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-07-05 23:04:06.059502 | orchestrator | Saturday 05 July 2025 23:03:05 +0000 (0:00:00.975) 0:05:10.987 ********* 2025-07-05 23:04:06.059516 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.059526 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.059535 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.059543 | orchestrator | 2025-07-05 23:04:06.059552 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-07-05 23:04:06.059574 | orchestrator | Saturday 05 July 2025 23:03:06 +0000 (0:00:00.460) 0:05:11.448 ********* 2025-07-05 23:04:06.059583 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.059591 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.059617 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.059627 | orchestrator | 2025-07-05 23:04:06.059636 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-07-05 23:04:06.059646 | orchestrator | Saturday 05 July 2025 23:03:07 +0000 (0:00:01.284) 0:05:12.733 ********* 2025-07-05 23:04:06.059654 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:04:06.059664 | orchestrator | 2025-07-05 23:04:06.059673 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-07-05 23:04:06.059683 | orchestrator | Saturday 05 July 2025 23:03:09 +0000 (0:00:01.731) 0:05:14.464 ********* 
2025-07-05 23:04:06.059692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.059703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.059713 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.059733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.059750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.059760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-05 23:04:06.059770 | orchestrator | 2025-07-05 23:04:06.059796 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-07-05 23:04:06.059806 | orchestrator | Saturday 05 July 2025 
23:03:15 +0000 (0:00:06.205) 0:05:20.670 ********* 2025-07-05 23:04:06.059815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.059829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.059842 | 
orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.059851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.059857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.059863 | orchestrator 
| skipping: [testbed-node-1] 2025-07-05 23:04:06.059869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.059875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-05 23:04:06.059885 | orchestrator | skipping: 
[testbed-node-2] 2025-07-05 23:04:06.059890 | orchestrator | 2025-07-05 23:04:06.059896 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-07-05 23:04:06.059904 | orchestrator | Saturday 05 July 2025 23:03:15 +0000 (0:00:00.630) 0:05:21.301 ********* 2025-07-05 23:04:06.059913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-05 23:04:06.059919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-05 23:04:06.059925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-05 23:04:06.059931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-05 23:04:06.059937 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:04:06.059942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-05 23:04:06.059948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-05 23:04:06.059954 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-05 23:04:06.059960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-05 23:04:06.059965 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:04:06.059971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-05 23:04:06.059977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-05 23:04:06.059983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-05 23:04:06.059988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-05 23:04:06.059994 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:04:06.060000 | orchestrator | 2025-07-05 23:04:06.060006 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-07-05 23:04:06.060015 | orchestrator | Saturday 05 July 2025 23:03:17 +0000 (0:00:01.646) 0:05:22.948 ********* 2025-07-05 23:04:06.060021 | orchestrator | changed: [testbed-node-0] 
2025-07-05 23:04:06.060026 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.060032 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.060038 | orchestrator |
2025-07-05 23:04:06.060043 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-07-05 23:04:06.060049 | orchestrator | Saturday 05 July 2025 23:03:18 +0000 (0:00:01.325) 0:05:24.274 *********
2025-07-05 23:04:06.060054 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.060060 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.060065 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.060071 | orchestrator |
2025-07-05 23:04:06.060077 | orchestrator | TASK [include_role : swift] ****************************************************
2025-07-05 23:04:06.060082 | orchestrator | Saturday 05 July 2025 23:03:20 +0000 (0:00:02.083) 0:05:26.357 *********
2025-07-05 23:04:06.060088 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.060093 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.060099 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.060104 | orchestrator |
2025-07-05 23:04:06.060110 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-07-05 23:04:06.060116 | orchestrator | Saturday 05 July 2025 23:03:21 +0000 (0:00:00.331) 0:05:26.689 *********
2025-07-05 23:04:06.060121 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.060127 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.060132 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.060138 | orchestrator |
2025-07-05 23:04:06.060144 | orchestrator | TASK [include_role : trove] ****************************************************
2025-07-05 23:04:06.060152 | orchestrator | Saturday 05 July 2025 23:03:21 +0000 (0:00:00.580) 0:05:27.269 *********
2025-07-05 23:04:06.060158 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.060163 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.060169 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.060175 | orchestrator |
2025-07-05 23:04:06.060183 | orchestrator | TASK [include_role : venus] ****************************************************
2025-07-05 23:04:06.060189 | orchestrator | Saturday 05 July 2025 23:03:22 +0000 (0:00:00.315) 0:05:27.584 *********
2025-07-05 23:04:06.060195 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.060200 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.060206 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.060211 | orchestrator |
2025-07-05 23:04:06.060217 | orchestrator | TASK [include_role : watcher] **************************************************
2025-07-05 23:04:06.060222 | orchestrator | Saturday 05 July 2025 23:03:22 +0000 (0:00:00.318) 0:05:27.903 *********
2025-07-05 23:04:06.060228 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.060233 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.060239 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.060244 | orchestrator |
2025-07-05 23:04:06.060250 | orchestrator | TASK [include_role : zun] ******************************************************
2025-07-05 23:04:06.060256 | orchestrator | Saturday 05 July 2025 23:03:22 +0000 (0:00:00.315) 0:05:28.219 *********
2025-07-05 23:04:06.060261 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.060266 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.060272 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.060277 | orchestrator |
2025-07-05 23:04:06.060283 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-07-05 23:04:06.060289 | orchestrator | Saturday 05 July 2025 23:03:23 +0000 (0:00:00.788) 0:05:29.008 *********
2025-07-05 23:04:06.060294 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:04:06.060300 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:04:06.060306 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:04:06.060311 | orchestrator |
2025-07-05 23:04:06.060317 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-07-05 23:04:06.060326 | orchestrator | Saturday 05 July 2025 23:03:24 +0000 (0:00:00.325) 0:05:29.727 *********
2025-07-05 23:04:06.060332 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:04:06.060338 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:04:06.060343 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:04:06.060349 | orchestrator |
2025-07-05 23:04:06.060355 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-07-05 23:04:06.060360 | orchestrator | Saturday 05 July 2025 23:03:24 +0000 (0:00:00.325) 0:05:30.053 *********
2025-07-05 23:04:06.060366 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:04:06.060371 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:04:06.060377 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:04:06.060384 | orchestrator |
2025-07-05 23:04:06.060393 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-07-05 23:04:06.060402 | orchestrator | Saturday 05 July 2025 23:03:25 +0000 (0:00:01.148) 0:05:31.202 *********
2025-07-05 23:04:06.060412 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:04:06.060420 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:04:06.060431 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:04:06.060443 | orchestrator |
2025-07-05 23:04:06.060451 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-07-05 23:04:06.060459 | orchestrator | Saturday 05 July 2025 23:03:26 +0000 (0:00:00.881) 0:05:32.073 *********
2025-07-05 23:04:06.060468 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:04:06.060476 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:04:06.060483 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:04:06.060492 | orchestrator |
2025-07-05 23:04:06.060500 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-07-05 23:04:06.060508 | orchestrator | Saturday 05 July 2025 23:03:27 +0000 (0:00:00.881) 0:05:32.954 *********
2025-07-05 23:04:06.060516 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.060524 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.060532 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.060540 | orchestrator |
2025-07-05 23:04:06.060549 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-07-05 23:04:06.060557 | orchestrator | Saturday 05 July 2025 23:03:32 +0000 (0:00:04.925) 0:05:37.880 *********
2025-07-05 23:04:06.060566 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:04:06.060574 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:04:06.060582 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:04:06.060591 | orchestrator |
2025-07-05 23:04:06.060616 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-07-05 23:04:06.060625 | orchestrator | Saturday 05 July 2025 23:03:35 +0000 (0:00:02.822) 0:05:40.703 *********
2025-07-05 23:04:06.060633 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.060643 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.060652 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.060660 | orchestrator |
2025-07-05 23:04:06.060669 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-07-05 23:04:06.060678 | orchestrator | Saturday 05 July 2025 23:03:48 +0000 (0:00:13.133) 0:05:53.837 *********
2025-07-05 23:04:06.060686 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:04:06.060695 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:04:06.060704 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:04:06.060713 | orchestrator |
2025-07-05 23:04:06.060722 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-07-05 23:04:06.060731 | orchestrator | Saturday 05 July 2025 23:03:49 +0000 (0:00:00.738) 0:05:54.576 *********
2025-07-05 23:04:06.060740 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:04:06.060749 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:04:06.060757 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:04:06.060765 | orchestrator |
2025-07-05 23:04:06.060775 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-07-05 23:04:06.060783 | orchestrator | Saturday 05 July 2025 23:03:53 +0000 (0:00:04.568) 0:05:59.144 *********
2025-07-05 23:04:06.060792 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.060809 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.060818 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.060826 | orchestrator |
2025-07-05 23:04:06.060834 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-07-05 23:04:06.060842 | orchestrator | Saturday 05 July 2025 23:03:54 +0000 (0:00:00.355) 0:05:59.500 *********
2025-07-05 23:04:06.060851 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.060868 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.060878 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.060886 | orchestrator |
2025-07-05 23:04:06.060894 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-07-05 23:04:06.060908 | orchestrator | Saturday 05 July 2025 23:03:54 +0000 (0:00:00.709) 0:06:00.209 *********
2025-07-05 23:04:06.060917 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.060925 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.060933 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.060941 | orchestrator |
2025-07-05 23:04:06.060950 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-07-05 23:04:06.060958 | orchestrator | Saturday 05 July 2025 23:03:55 +0000 (0:00:00.351) 0:06:00.561 *********
2025-07-05 23:04:06.060966 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.060974 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.060982 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.060990 | orchestrator |
2025-07-05 23:04:06.060998 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-07-05 23:04:06.061007 | orchestrator | Saturday 05 July 2025 23:03:55 +0000 (0:00:00.354) 0:06:00.916 *********
2025-07-05 23:04:06.061015 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.061024 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.061032 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.061040 | orchestrator |
2025-07-05 23:04:06.061048 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-07-05 23:04:06.061057 | orchestrator | Saturday 05 July 2025 23:03:55 +0000 (0:00:00.383) 0:06:01.299 *********
2025-07-05 23:04:06.061065 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:04:06.061073 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:04:06.061082 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:04:06.061090 | orchestrator |
2025-07-05 23:04:06.061099 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-07-05 23:04:06.061107 | orchestrator | Saturday 05 July 2025 23:03:56 +0000 (0:00:00.698) 0:06:01.997 *********
2025-07-05 23:04:06.061116 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:04:06.061125 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:04:06.061133 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:04:06.061143 | orchestrator |
2025-07-05 23:04:06.061152 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-07-05 23:04:06.061161 | orchestrator | Saturday 05 July 2025 23:04:01 +0000 (0:00:04.813) 0:06:06.811 *********
2025-07-05 23:04:06.061170 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:04:06.061179 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:04:06.061188 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:04:06.061197 | orchestrator |
2025-07-05 23:04:06.061206 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 23:04:06.061213 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-05 23:04:06.061219 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-05 23:04:06.061224 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-05 23:04:06.061230 | orchestrator |
2025-07-05 23:04:06.061236 | orchestrator |
2025-07-05 23:04:06.061241 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 23:04:06.061255 | orchestrator | Saturday 05 July 2025 23:04:02 +0000 (0:00:00.808) 0:06:07.620 *********
2025-07-05 23:04:06.061261 | orchestrator | ===============================================================================
2025-07-05 23:04:06.061266 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.13s
2025-07-05 23:04:06.061272 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.21s
2025-07-05 23:04:06.061278 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.48s
2025-07-05 23:04:06.061283 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.93s
2025-07-05 23:04:06.061289 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.81s
2025-07-05 23:04:06.061295 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.70s
2025-07-05 23:04:06.061300 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.64s
2025-07-05 23:04:06.061306 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.64s
2025-07-05 23:04:06.061311 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.57s
2025-07-05 23:04:06.061317 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.27s
2025-07-05 23:04:06.061323 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.26s
2025-07-05 23:04:06.061328 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.14s
2025-07-05 23:04:06.061334 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.09s
2025-07-05 23:04:06.061339 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.06s
2025-07-05 23:04:06.061345 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.92s
2025-07-05 23:04:06.061350 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.92s
2025-07-05 23:04:06.061356 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.87s
2025-07-05 23:04:06.061362 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.86s
2025-07-05 23:04:06.061367 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.64s
2025-07-05 23:04:06.061373 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.56s
2025-07-05 23:04:06.061385 | orchestrator | 2025-07-05 23:04:06 | INFO  | Task 42fa5f8c-034e-49dc-a127-512ab30d0761 is in state STARTED
2025-07-05 23:04:06.061395 | orchestrator | 2025-07-05 23:04:06 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:04:09.069969 | orchestrator | 2025-07-05 23:04:09 | INFO  | Task 85c1671a-9d3e-4d0c-a62b-29d6c3bf5bdf is in state STARTED
2025-07-05 23:04:09.070136 | orchestrator | 2025-07-05 23:04:09 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 23:04:09.071431 | orchestrator | 2025-07-05 23:04:09 | INFO  | Task 42fa5f8c-034e-49dc-a127-512ab30d0761 is in state STARTED
2025-07-05 23:04:09.071526 | orchestrator | 2025-07-05 23:04:09 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:04:12.102714 | orchestrator | 2025-07-05 23:04:12 | INFO  | Task 85c1671a-9d3e-4d0c-a62b-29d6c3bf5bdf is in state STARTED
2025-07-05 23:04:12.104011 | orchestrator | 2025-07-05 23:04:12 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 23:04:12.106089 | orchestrator | 2025-07-05 23:04:12 | INFO  | Task 42fa5f8c-034e-49dc-a127-512ab30d0761 is in state STARTED
2025-07-05 23:04:12.106118 | orchestrator | 2025-07-05 23:04:12 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:04:15.147755 | orchestrator | 2025-07-05 23:04:15 | INFO  | Task 85c1671a-9d3e-4d0c-a62b-29d6c3bf5bdf is in state STARTED
2025-07-05 23:04:15.150357 | orchestrator | 2025-07-05 23:04:15 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 23:04:15.152116 | orchestrator | 2025-07-05 23:04:15 | INFO  | Task 42fa5f8c-034e-49dc-a127-512ab30d0761 is in state STARTED
2025-07-05 23:04:15.152139 | orchestrator | 2025-07-05 23:04:15 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:04:18.197069 | orchestrator | 2025-07-05 23:04:18 | INFO  | Task 85c1671a-9d3e-4d0c-a62b-29d6c3bf5bdf is in state STARTED
2025-07-05 23:04:18.199913 |
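The PLAY RECAP block above reports standard Ansible per-host counters (ok/changed/unreachable/failed/skipped/rescued/ignored). A minimal, hypothetical helper for pulling those counters out of a captured log is sketched below; the regex targets Ansible's usual recap line format, and `parse_recap_line` is an assumed name, not part of this job's tooling.

```python
import re

# Matches Ansible "PLAY RECAP" host lines, e.g.
#   testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+unreachable=(?P<unreachable>\d+)\s+"
    r"failed=(?P<failed>\d+)\s+skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)\s+"
    r"ignored=(?P<ignored>\d+)\s*$"
)

def parse_recap_line(line):
    """Return (host, counters) for a recap line, or None if it does not match."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    host = m.group("host")
    # All named groups except the host name are integer counters.
    counters = {k: int(v) for k, v in m.groupdict().items() if k != "host"}
    return host, counters
```

Such a helper makes it easy to flag hosts with `failed` or `unreachable` greater than zero when scanning many job logs.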
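The repeated status lines follow a simple poll-until-terminal-state pattern: query each task, log its state, and sleep a fixed interval until every task leaves STARTED. A minimal sketch of that loop follows; `get_state` is an assumed callable standing in for the real OSISM task-status client, which is not shown in this log.

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600):
    """Poll each task until all reach a terminal state or the timeout expires.

    get_state(task_id) is assumed to return a state string such as
    "STARTED", "SUCCESS", or "FAILURE" (hypothetical interface).
    """
    terminal = {"SUCCESS", "FAILURE"}
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
            if state in terminal:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

The fixed sleep keeps the log output regular, as seen above; a production client might add jitter or exponential backoff instead.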
INFO  | Task 42fa5f8c-034e-49dc-a127-512ab30d0761 is in state STARTED
2025-07-05 23:06:14.131964 | orchestrator | 2025-07-05 23:06:14 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:06:17.172619 | orchestrator | 2025-07-05 23:06:17 | INFO  | Task 85c1671a-9d3e-4d0c-a62b-29d6c3bf5bdf is in state STARTED
2025-07-05 23:06:17.174148 | orchestrator | 2025-07-05 23:06:17 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state STARTED
2025-07-05 23:06:17.176030 | orchestrator | 2025-07-05 23:06:17 | INFO  | Task 42fa5f8c-034e-49dc-a127-512ab30d0761 is in state STARTED
2025-07-05 23:06:17.176167 | orchestrator | 2025-07-05 23:06:17 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:06:20.223209 | orchestrator | 2025-07-05 23:06:20 | INFO  | Task 85c1671a-9d3e-4d0c-a62b-29d6c3bf5bdf is in state STARTED
2025-07-05 23:06:20.228263 | orchestrator | 2025-07-05 23:06:20 | INFO  | Task 657a029a-b3d3-4dd1-b55e-0be749c54a9c is in state SUCCESS
2025-07-05 23:06:20.230128 | orchestrator |
2025-07-05 23:06:20.230265 | orchestrator |
2025-07-05 23:06:20.230282 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-07-05 23:06:20.230296 | orchestrator |
2025-07-05 23:06:20.230308 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-07-05 23:06:20.230324 | orchestrator | Saturday 05 July 2025 22:55:13 +0000 (0:00:00.690) 0:00:00.690 *********
2025-07-05 23:06:20.230337 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:06:20.230436 | orchestrator |
2025-07-05 23:06:20.230448 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-07-05 23:06:20.230459 | orchestrator | Saturday 05 July 2025 22:55:14 +0000 (0:00:01.155) 0:00:01.846 *********
2025-07-05 23:06:20.230471 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.230500 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.230512 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.230549 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.230561 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.230572 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.230583 | orchestrator |
2025-07-05 23:06:20.230594 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-07-05 23:06:20.230606 | orchestrator | Saturday 05 July 2025 22:55:16 +0000 (0:00:01.639) 0:00:03.485 *********
2025-07-05 23:06:20.230618 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.230669 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.230683 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.230695 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.230708 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.230720 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.230732 | orchestrator |
2025-07-05 23:06:20.230745 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-07-05 23:06:20.230758 | orchestrator | Saturday 05 July 2025 22:55:16 +0000 (0:00:00.756) 0:00:04.242 *********
2025-07-05 23:06:20.230800 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.230813 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.230978 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.231033 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.231070 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.231083 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.231094 | orchestrator |
2025-07-05 23:06:20.231130 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-07-05 23:06:20.231142 | orchestrator | Saturday 05 July 2025 22:55:17 +0000 (0:00:00.928) 0:00:05.171 *********
2025-07-05 23:06:20.231153 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.231242 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.231254 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.231265 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.231276 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.231288 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.231299 | orchestrator |
2025-07-05 23:06:20.231311 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-07-05 23:06:20.231322 | orchestrator | Saturday 05 July 2025 22:55:18 +0000 (0:00:00.605) 0:00:05.776 *********
2025-07-05 23:06:20.231334 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.231345 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.231356 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.231367 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.231378 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.231389 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.231401 | orchestrator |
2025-07-05 23:06:20.231412 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-07-05 23:06:20.231423 | orchestrator | Saturday 05 July 2025 22:55:18 +0000 (0:00:00.510) 0:00:06.287 *********
2025-07-05 23:06:20.231435 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.231446 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.231457 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.231468 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.231480 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.231491 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.231532 | orchestrator |
2025-07-05 23:06:20.231573 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-07-05 23:06:20.231587 | orchestrator | Saturday 05 July 2025 22:55:19 +0000 (0:00:00.924) 0:00:07.211 *********
2025-07-05 23:06:20.231598 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.231611 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.231679 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.231695 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.231739 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.231751 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.231762 | orchestrator |
2025-07-05 23:06:20.231774 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-07-05 23:06:20.231796 | orchestrator | Saturday 05 July 2025 22:55:20 +0000 (0:00:00.773) 0:00:07.984 *********
2025-07-05 23:06:20.231807 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.231818 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.231829 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.231840 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.231851 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.231862 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.231873 | orchestrator |
2025-07-05 23:06:20.231884 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-07-05 23:06:20.231895 | orchestrator | Saturday 05 July 2025 22:55:21 +0000 (0:00:00.725) 0:00:08.709 *********
2025-07-05 23:06:20.231907 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-05 23:06:20.231918 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-05 23:06:20.231929 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-05 23:06:20.231941 | orchestrator |
2025-07-05 23:06:20.231952 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-07-05 23:06:20.231963 | orchestrator | Saturday 05 July 2025 22:55:22 +0000 (0:00:00.754) 0:00:09.463 *********
2025-07-05 23:06:20.231974 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.231985 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.231996 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.232007 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.232018 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.232029 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.232040 | orchestrator |
2025-07-05 23:06:20.232067 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-07-05 23:06:20.232079 | orchestrator | Saturday 05 July 2025 22:55:23 +0000 (0:00:01.317) 0:00:10.781 *********
2025-07-05 23:06:20.232091 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-05 23:06:20.232303 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-05 23:06:20.232316 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-05 23:06:20.232327 | orchestrator |
2025-07-05 23:06:20.232338 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-07-05 23:06:20.232349 | orchestrator | Saturday 05 July 2025 22:55:26 +0000 (0:00:02.681) 0:00:13.462 *********
2025-07-05 23:06:20.232368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-05 23:06:20.232381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-05 23:06:20.232392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-05 23:06:20.232403 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.232415 | orchestrator |
2025-07-05 23:06:20.232426 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-07-05 23:06:20.232437 | orchestrator | Saturday 05 July 2025 22:55:26 +0000 (0:00:00.701) 0:00:14.164 *********
2025-07-05 23:06:20.232450 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-05 23:06:20.232464 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-05 23:06:20.232476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-05 23:06:20.232488 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.232499 | orchestrator |
2025-07-05 23:06:20.232510 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-07-05 23:06:20.232530 | orchestrator | Saturday 05 July 2025 22:55:27 +0000 (0:00:00.695) 0:00:14.859 *********
2025-07-05 23:06:20.232544 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-05 23:06:20.232558 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-05 23:06:20.232570 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-05 23:06:20.232582 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.232593 | orchestrator |
2025-07-05 23:06:20.232702 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-07-05 23:06:20.232721 | orchestrator | Saturday 05 July 2025 22:55:27 +0000 (0:00:00.436) 0:00:15.296 *********
2025-07-05 23:06:20.232802 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-05 22:55:23.881542', 'end': '2025-07-05 22:55:24.175868', 'delta': '0:00:00.294326', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-05 23:06:20.232828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-05 22:55:24.825431', 'end': '2025-07-05 22:55:25.105909', 'delta': '0:00:00.280478', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-05 23:06:20.232841 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-05 22:55:25.581470', 'end': '2025-07-05 22:55:25.871227', 'delta': '0:00:00.289757', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-05 23:06:20.232862 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.232873 | orchestrator |
2025-07-05 23:06:20.232885 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-07-05 23:06:20.232921 | orchestrator | Saturday 05 July 2025 22:55:28 +0000 (0:00:00.251) 0:00:15.547 *********
2025-07-05 23:06:20.232933 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.232944 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.232955 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.232966 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.233089 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.233101 |
orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.233112 | orchestrator | 2025-07-05 23:06:20.233123 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-07-05 23:06:20.233134 | orchestrator | Saturday 05 July 2025 22:55:29 +0000 (0:00:01.597) 0:00:17.144 ********* 2025-07-05 23:06:20.233145 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-05 23:06:20.233157 | orchestrator | 2025-07-05 23:06:20.233168 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-07-05 23:06:20.233179 | orchestrator | Saturday 05 July 2025 22:55:30 +0000 (0:00:00.614) 0:00:17.759 ********* 2025-07-05 23:06:20.233190 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.233200 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.233210 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.233220 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.233230 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.233240 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.233250 | orchestrator | 2025-07-05 23:06:20.233260 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-07-05 23:06:20.233270 | orchestrator | Saturday 05 July 2025 22:55:31 +0000 (0:00:01.490) 0:00:19.250 ********* 2025-07-05 23:06:20.233279 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.233289 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.233299 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.233309 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.233319 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.233329 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.233339 | orchestrator | 2025-07-05 23:06:20.233348 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2025-07-05 23:06:20.233359 | orchestrator | Saturday 05 July 2025 22:55:33 +0000 (0:00:01.477) 0:00:20.727 ********* 2025-07-05 23:06:20.233369 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.233378 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.233388 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.233398 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.233408 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.233418 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.233428 | orchestrator | 2025-07-05 23:06:20.233438 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-07-05 23:06:20.233448 | orchestrator | Saturday 05 July 2025 22:55:34 +0000 (0:00:01.010) 0:00:21.738 ********* 2025-07-05 23:06:20.233458 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.233468 | orchestrator | 2025-07-05 23:06:20.233478 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-07-05 23:06:20.233488 | orchestrator | Saturday 05 July 2025 22:55:34 +0000 (0:00:00.182) 0:00:21.920 ********* 2025-07-05 23:06:20.233497 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.233507 | orchestrator | 2025-07-05 23:06:20.233517 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-07-05 23:06:20.233527 | orchestrator | Saturday 05 July 2025 22:55:34 +0000 (0:00:00.293) 0:00:22.214 ********* 2025-07-05 23:06:20.233537 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.233547 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.233557 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.233574 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.233584 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.233593 | orchestrator | skipping: 
[testbed-node-2] 2025-07-05 23:06:20.233603 | orchestrator | 2025-07-05 23:06:20.233619 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-07-05 23:06:20.233658 | orchestrator | Saturday 05 July 2025 22:55:35 +0000 (0:00:00.872) 0:00:23.086 ********* 2025-07-05 23:06:20.233676 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.233693 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.233709 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.233720 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.233729 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.233739 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.233749 | orchestrator | 2025-07-05 23:06:20.233759 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-07-05 23:06:20.233768 | orchestrator | Saturday 05 July 2025 22:55:36 +0000 (0:00:01.228) 0:00:24.315 ********* 2025-07-05 23:06:20.233778 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.233794 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.233804 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.233814 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.233824 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.233833 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.233843 | orchestrator | 2025-07-05 23:06:20.233853 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-07-05 23:06:20.233863 | orchestrator | Saturday 05 July 2025 22:55:37 +0000 (0:00:00.945) 0:00:25.260 ********* 2025-07-05 23:06:20.233873 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.233883 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.233892 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.233902 | orchestrator | skipping: 
[testbed-node-0] 2025-07-05 23:06:20.233912 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.233921 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.233931 | orchestrator | 2025-07-05 23:06:20.233941 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-07-05 23:06:20.233951 | orchestrator | Saturday 05 July 2025 22:55:38 +0000 (0:00:01.089) 0:00:26.350 ********* 2025-07-05 23:06:20.233961 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.233971 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.233985 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.234001 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.234063 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.234086 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.234103 | orchestrator | 2025-07-05 23:06:20.234121 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-07-05 23:06:20.234138 | orchestrator | Saturday 05 July 2025 22:55:39 +0000 (0:00:00.661) 0:00:27.011 ********* 2025-07-05 23:06:20.234156 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.234170 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.234180 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.234190 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.234199 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.234209 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.234220 | orchestrator | 2025-07-05 23:06:20.234230 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-07-05 23:06:20.234240 | orchestrator | Saturday 05 July 2025 22:55:40 +0000 (0:00:00.761) 0:00:27.773 ********* 2025-07-05 23:06:20.234250 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.234260 | orchestrator | skipping: 
[testbed-node-4] 2025-07-05 23:06:20.234270 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.234280 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.234289 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.234319 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.234338 | orchestrator | 2025-07-05 23:06:20.234348 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-07-05 23:06:20.234358 | orchestrator | Saturday 05 July 2025 22:55:41 +0000 (0:00:00.714) 0:00:28.487 ********* 2025-07-05 23:06:20.234370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8de564a6--401f--59e2--a445--234b3be175ce-osd--block--8de564a6--401f--59e2--a445--234b3be175ce', 'dm-uuid-LVM-yHtE4PzHBOsC3Ab6k4h6UunvVgRZUljOyF0P01Uq98ByQ0pAqL4fpNPtInc5P8ye'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2634d3d6--ac41--59e6--b3da--1ade7ee25156-osd--block--2634d3d6--ac41--59e6--b3da--1ade7ee25156', 'dm-uuid-LVM-HNqQQbtbb7sVYUTh3YcRap3mPYUyU3fkt8hNxUBSxMkyX8ntFOWb1kbhe9IQ0M1G'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b5adb4f--945c--5107--b1d3--f691d6050e0c-osd--block--9b5adb4f--945c--5107--b1d3--f691d6050e0c', 'dm-uuid-LVM-hdeKPafsMg7oZmQmUUtbjbXxCeVfK4Fc7Tozb4P6FyGwhttWgPO1w0OMYdhIdyr0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--24fdde66--e3ee--586c--8774--3b73abfeacc0-osd--block--24fdde66--e3ee--586c--8774--3b73abfeacc0', 'dm-uuid-LVM-6hql6u10Y3qbXznYPiK0N8Td6VOlEbXSl5ZAOlNelf3eImqtX1a6YOLzfgydvpBk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234505 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part1', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part14', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part15', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part16', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.234711 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9b5adb4f--945c--5107--b1d3--f691d6050e0c-osd--block--9b5adb4f--945c--5107--b1d3--f691d6050e0c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z4HY1o-13oW-RsRu-9VSO-ZtBJ-IIbn-oPzaDr', 'scsi-0QEMU_QEMU_HARDDISK_19122c33-f71f-45f9-9cf9-313728601123', 'scsi-SQEMU_QEMU_HARDDISK_19122c33-f71f-45f9-9cf9-313728601123'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.234755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--24fdde66--e3ee--586c--8774--3b73abfeacc0-osd--block--24fdde66--e3ee--586c--8774--3b73abfeacc0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pvVztE-UAM6-eWNY-4WU1-mWtD-ELcv-vmv72r', 'scsi-0QEMU_QEMU_HARDDISK_04acd911-9b95-486d-a663-ed49966b13bc', 'scsi-SQEMU_QEMU_HARDDISK_04acd911-9b95-486d-a663-ed49966b13bc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.234766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--469f88b0--11f8--5147--93f6--bf0afec867dc-osd--block--469f88b0--11f8--5147--93f6--bf0afec867dc', 'dm-uuid-LVM-1H10CvQOUznXT9n1BnnWrsxAkkyW6SNNrdQn3ewRMzqpIFWJl0UCRwDdc8tONcGz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.234906 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2969909f--2c17--514e--91b3--dec9da8cf58e-osd--block--2969909f--2c17--514e--91b3--dec9da8cf58e', 'dm-uuid-LVM-3YjiS71PI2OLsXeqnuBee7SEF5kpub6Hb61eTc20ueFBC0T8aPCTXo1JWz2FzmIx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.234917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8c0761f-22b5-43a1-bf1b-76278e72919b', 'scsi-SQEMU_QEMU_HARDDISK_b8c0761f-22b5-43a1-bf1b-76278e72919b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.234928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8de564a6--401f--59e2--a445--234b3be175ce-osd--block--8de564a6--401f--59e2--a445--234b3be175ce'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-l35b8k-4jZw-AVF9-dtCV-55lc-hBW0-OsqjEI', 'scsi-0QEMU_QEMU_HARDDISK_5326e027-1676-4a37-b778-dc441a5dd20f', 'scsi-SQEMU_QEMU_HARDDISK_5326e027-1676-4a37-b778-dc441a5dd20f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.235865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.235944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.235979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 
KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.235994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2634d3d6--ac41--59e6--b3da--1ade7ee25156-osd--block--2634d3d6--ac41--59e6--b3da--1ade7ee25156'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mYe279-QSeR-9Auk-vsEd-KS7m-Nuf3-zO0Nwb', 'scsi-0QEMU_QEMU_HARDDISK_ed4648fa-96a1-4881-93bd-124d41734f11', 'scsi-SQEMU_QEMU_HARDDISK_ed4648fa-96a1-4881-93bd-124d41734f11'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.236031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21be9c94-8d55-4d0c-8ee7-a63f66622af7', 'scsi-SQEMU_QEMU_HARDDISK_21be9c94-8d55-4d0c-8ee7-a63f66622af7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.236069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
 2025-07-05 23:06:20.236127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part1', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part14', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part15', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part16', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.236185 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--469f88b0--11f8--5147--93f6--bf0afec867dc-osd--block--469f88b0--11f8--5147--93f6--bf0afec867dc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MZ3H06-vyID-Ryiz-06Ik-f0Gf-PHsA-46Jjd9', 'scsi-0QEMU_QEMU_HARDDISK_8a7d49ca-9238-4676-a846-742ace525871', 'scsi-SQEMU_QEMU_HARDDISK_8a7d49ca-9238-4676-a846-742ace525871'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.236199 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.236217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2969909f--2c17--514e--91b3--dec9da8cf58e-osd--block--2969909f--2c17--514e--91b3--dec9da8cf58e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ehPJtv-O6yk-3p3B-PYU1-aGTG-Up2O-ffGFtu', 'scsi-0QEMU_QEMU_HARDDISK_ba536110-d8e3-4c62-9758-5989affe708c', 'scsi-SQEMU_QEMU_HARDDISK_ba536110-d8e3-4c62-9758-5989affe708c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.236236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f21d976d-9ccd-433e-8515-86bf556b9e6c', 'scsi-SQEMU_QEMU_HARDDISK_f21d976d-9ccd-433e-8515-86bf556b9e6c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.236249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.236260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-07-05 23:06:20.236309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca', 'scsi-SQEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part1', 'scsi-SQEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part14', 'scsi-SQEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part15', 'scsi-SQEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part16', 'scsi-SQEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.236424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.236439 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.236452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b', 'scsi-SQEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part1', 'scsi-SQEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part14', 'scsi-SQEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part15', 'scsi-SQEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part16', 'scsi-SQEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.236610 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.236651 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.236670 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.236690 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.236714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-07-05 23:06:20.236832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:06:20.236845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6', 'scsi-SQEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part1', 'scsi-SQEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part14', 'scsi-SQEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part15', 'scsi-SQEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part16', 'scsi-SQEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:06:20.236865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-05 23:06:20.236899 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.236912 | orchestrator |
2025-07-05 23:06:20.236924 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-07-05 23:06:20.236936 | orchestrator | Saturday 05 July 2025 22:55:42 +0000 (0:00:01.558) 0:00:30.045 *********
2025-07-05 23:06:20.236953 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8de564a6--401f--59e2--a445--234b3be175ce-osd--block--8de564a6--401f--59e2--a445--234b3be175ce',
'dm-uuid-LVM-yHtE4PzHBOsC3Ab6k4h6UunvVgRZUljOyF0P01Uq98ByQ0pAqL4fpNPtInc5P8ye'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.236967 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2634d3d6--ac41--59e6--b3da--1ade7ee25156-osd--block--2634d3d6--ac41--59e6--b3da--1ade7ee25156', 'dm-uuid-LVM-HNqQQbtbb7sVYUTh3YcRap3mPYUyU3fkt8hNxUBSxMkyX8ntFOWb1kbhe9IQ0M1G'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-05 23:06:20.236980 | orchestrator | skipping: [testbed-node-3] => items loop0-loop7 (0.00 Bytes virtual loop devices), sda (80.00 GB QEMU HARDDISK, partitions sda1/sda14/sda15/sda16), sdb and sdc (20.00 GB QEMU HARDDISK, LVM PVs backing the ceph OSD block LVs dm-0/dm-1), sdd (20.00 GB QEMU HARDDISK, no holders), sr0 (QEMU DVD-ROM, label config-2); false_condition: 'osd_auto_discovery | default(False) | bool'
2025-07-05 23:06:20.237070 | orchestrator | skipping: [testbed-node-4] => items dm-0 and dm-1 (20.00 GB ceph OSD block LVs), loop0-loop7 (0.00 Bytes virtual loop devices), sda (80.00 GB QEMU HARDDISK, partitions sda1/sda14/sda15/sda16), sdb and sdc (20.00 GB QEMU HARDDISK, LVM PVs backing the ceph OSD block LVs dm-0/dm-1), sdd (20.00 GB QEMU HARDDISK, no holders), sr0 (QEMU DVD-ROM, label config-2); false_condition: 'osd_auto_discovery | default(False) | bool'
2025-07-05 23:06:20.237418 | orchestrator | skipping: [testbed-node-5] => items dm-0 and dm-1 (20.00 GB ceph OSD block LVs), loop0-loop7 (0.00 Bytes virtual loop devices), sda (80.00 GB QEMU HARDDISK, partitions sda1/sda14/sda15/sda16), sdb and sdc (20.00 GB QEMU HARDDISK, LVM PVs backing the ceph OSD block LVs dm-0/dm-1), sdd (20.00 GB QEMU HARDDISK, no holders); false_condition: 'osd_auto_discovery | default(False) | bool'
2025-07-05 23:06:20.237488 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.237619 | orchestrator | skipping: [testbed-node-0] => items loop0 and loop1 (0.00 Bytes virtual loop devices); false_condition: 'inventory_hostname in groups.get(osd_group_name, [])'
2025-07-05 23:06:20.237880 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.237900 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.237919 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238058 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238077 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238210 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238232 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238246 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca', 'scsi-SQEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part1', 'scsi-SQEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part14', 'scsi-SQEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part15', 'scsi-SQEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part16', 'scsi-SQEMU_QEMU_HARDDISK_8af9ded3-14bc-4604-a1eb-e76458d00fca-part16'], 'labels': ['BOOT'], 
'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238275 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238288 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.238305 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 
23:06:20.238317 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238329 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238341 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238359 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238371 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238389 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238406 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238419 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b', 'scsi-SQEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part1', 'scsi-SQEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part14', 'scsi-SQEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part15', 
'scsi-SQEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part16', 'scsi-SQEMU_QEMU_HARDDISK_8423e915-6ffe-427e-9e66-23a147af282b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238439 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238451 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.238463 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.238474 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.238492 | orchestrator | skipping: [testbed-node-2] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238509 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238521 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238539 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238551 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238563 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238582 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238599 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:06:20.238612 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6', 'scsi-SQEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part1', 'scsi-SQEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part14', 'scsi-SQEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part15', 'scsi-SQEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part16', 'scsi-SQEMU_QEMU_HARDDISK_ad68d6d7-9217-4c8b-8a7e-4cc3957db2f6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-05 23:06:20.238665 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-05 23:06:20.238678 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.238689 | orchestrator |
2025-07-05 23:06:20.238701 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-07-05 23:06:20.238714 | orchestrator | Saturday 05 July 2025 22:55:43 +0000 (0:00:00.996) 0:00:31.041 *********
2025-07-05 23:06:20.238731 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.238743 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.238754 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.238765 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.238777 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.238787 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.238799 | orchestrator |
2025-07-05 23:06:20.238811 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-07-05 23:06:20.238825 | orchestrator | Saturday 05 July 2025 22:55:44 +0000 (0:00:01.048) 0:00:32.089 *********
2025-07-05 23:06:20.238838 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.238851 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.238863 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.238875 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.238888 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.238900 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.238911 | orchestrator |
2025-07-05 23:06:20.238927 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-05 23:06:20.238939 | orchestrator | Saturday 05 July 2025 22:55:45 +0000 (0:00:00.953) 0:00:33.043 *********
2025-07-05 23:06:20.238950 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.238961 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.238972 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.238989 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.239000 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.239011 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.239022 | orchestrator |
2025-07-05 23:06:20.239033 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-05 23:06:20.239045 | orchestrator | Saturday 05 July 2025 22:55:46 +0000 (0:00:00.923) 0:00:33.966 *********
2025-07-05 23:06:20.239056 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.239066 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.239077 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.239088 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.239099 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.239110 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.239121 | orchestrator |
2025-07-05 23:06:20.239132 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-05 23:06:20.239143 | orchestrator | Saturday 05 July 2025 22:55:47 +0000 (0:00:00.992) 0:00:34.958 *********
2025-07-05 23:06:20.239154 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.239165 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.239177 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.239188 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.239199 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.239209 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.239220 | orchestrator |
2025-07-05 23:06:20.239231 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-05 23:06:20.239243 | orchestrator | Saturday 05 July 2025 22:55:48 +0000 (0:00:01.355) 0:00:36.314 *********
2025-07-05 23:06:20.239254 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.239265 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.239276 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.239286 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.239298 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.239308 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.239319 | orchestrator |
2025-07-05 23:06:20.239330 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-07-05 23:06:20.239341 | orchestrator | Saturday 05 July 2025 22:55:49 +0000 (0:00:00.871) 0:00:37.186 *********
2025-07-05 23:06:20.239353 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-07-05 23:06:20.239364 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-07-05 23:06:20.239375 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-07-05 23:06:20.239386 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-07-05 23:06:20.239397 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-07-05 23:06:20.239408 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-07-05 23:06:20.239419 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-07-05 23:06:20.239430 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-07-05 23:06:20.239440 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-05 23:06:20.239451 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-07-05 23:06:20.239463 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-07-05 23:06:20.239473 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-07-05 23:06:20.239484 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-07-05 23:06:20.239495 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-07-05 23:06:20.239506 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-07-05 23:06:20.239517 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-07-05 23:06:20.239528 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-07-05 23:06:20.239539 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-07-05 23:06:20.239551 | orchestrator |
2025-07-05 23:06:20.239562 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-07-05 23:06:20.239579 | orchestrator | Saturday 05 July 2025 22:55:52 +0000 (0:00:02.708) 0:00:39.894 *********
2025-07-05 23:06:20.239591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-05 23:06:20.239602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-05 23:06:20.239613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-05 23:06:20.239648 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.239660 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-05 23:06:20.239671 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-05 23:06:20.239682 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-05 23:06:20.239693 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.239704 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-07-05 23:06:20.239715 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-07-05 23:06:20.239732 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-05 23:06:20.239743 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.239754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-05 23:06:20.239765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-05 23:06:20.239776 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-05 23:06:20.239787 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.239798 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-07-05 23:06:20.239809 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-07-05 23:06:20.239820 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-07-05 23:06:20.239831 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.239842 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-07-05 23:06:20.239858 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-07-05 23:06:20.239869 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-07-05 23:06:20.239880 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.239891 | orchestrator | 2025-07-05 23:06:20.239902 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-07-05 23:06:20.239914 | orchestrator | Saturday 05 July 2025 22:55:53 +0000 (0:00:00.898) 0:00:40.793 ********* 2025-07-05 23:06:20.239925 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.239936 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.239947 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.239958 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:06:20.239969 | orchestrator | 2025-07-05 23:06:20.239981 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-07-05 23:06:20.239992 | orchestrator | Saturday 05 July 2025 22:55:54 +0000 (0:00:01.366) 0:00:42.159 ********* 2025-07-05 23:06:20.240003 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.240014 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.240025 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.240036 | orchestrator | 2025-07-05 23:06:20.240047 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-07-05 23:06:20.240058 | orchestrator | Saturday 05 July 2025 22:55:55 +0000 (0:00:00.483) 0:00:42.643 ********* 2025-07-05 23:06:20.240069 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.240080 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.240091 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.240102 | orchestrator | 2025-07-05 23:06:20.240113 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-07-05 23:06:20.240124 | orchestrator | Saturday 05 July 2025 22:55:55 +0000 (0:00:00.415) 0:00:43.059 ********* 2025-07-05 23:06:20.240135 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.240146 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.240164 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.240175 | orchestrator | 2025-07-05 23:06:20.240186 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-07-05 23:06:20.240197 | orchestrator | Saturday 05 July 2025 22:55:55 +0000 (0:00:00.264) 0:00:43.323 ********* 2025-07-05 23:06:20.240208 | orchestrator | 
ok: [testbed-node-3] 2025-07-05 23:06:20.240219 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.240231 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.240242 | orchestrator | 2025-07-05 23:06:20.240253 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-07-05 23:06:20.240264 | orchestrator | Saturday 05 July 2025 22:55:56 +0000 (0:00:00.422) 0:00:43.745 ********* 2025-07-05 23:06:20.240275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-05 23:06:20.240286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-05 23:06:20.240297 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-05 23:06:20.240309 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.240320 | orchestrator | 2025-07-05 23:06:20.240331 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-07-05 23:06:20.240342 | orchestrator | Saturday 05 July 2025 22:55:56 +0000 (0:00:00.571) 0:00:44.316 ********* 2025-07-05 23:06:20.240353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-05 23:06:20.240364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-05 23:06:20.240375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-05 23:06:20.240386 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.240397 | orchestrator | 2025-07-05 23:06:20.240408 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-07-05 23:06:20.240419 | orchestrator | Saturday 05 July 2025 22:55:57 +0000 (0:00:00.525) 0:00:44.842 ********* 2025-07-05 23:06:20.240430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-05 23:06:20.240441 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-05 23:06:20.240452 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2025-07-05 23:06:20.240463 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.240474 | orchestrator | 2025-07-05 23:06:20.240485 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-07-05 23:06:20.240496 | orchestrator | Saturday 05 July 2025 22:55:57 +0000 (0:00:00.393) 0:00:45.235 ********* 2025-07-05 23:06:20.240508 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.240519 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.240530 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.240541 | orchestrator | 2025-07-05 23:06:20.240552 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-07-05 23:06:20.240563 | orchestrator | Saturday 05 July 2025 22:55:58 +0000 (0:00:00.469) 0:00:45.705 ********* 2025-07-05 23:06:20.240574 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-05 23:06:20.240585 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-05 23:06:20.240596 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-05 23:06:20.240607 | orchestrator | 2025-07-05 23:06:20.240649 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-07-05 23:06:20.240671 | orchestrator | Saturday 05 July 2025 22:55:58 +0000 (0:00:00.588) 0:00:46.293 ********* 2025-07-05 23:06:20.240690 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-05 23:06:20.240709 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-05 23:06:20.240728 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-05 23:06:20.240740 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-05 23:06:20.240751 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-05 23:06:20.240762 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-05 23:06:20.240786 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-05 23:06:20.240797 | orchestrator | 2025-07-05 23:06:20.240808 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-07-05 23:06:20.240819 | orchestrator | Saturday 05 July 2025 22:55:59 +0000 (0:00:00.925) 0:00:47.219 ********* 2025-07-05 23:06:20.240830 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-05 23:06:20.240841 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-05 23:06:20.240851 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-05 23:06:20.240862 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-05 23:06:20.240873 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-05 23:06:20.240884 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-05 23:06:20.240895 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-05 23:06:20.240906 | orchestrator | 2025-07-05 23:06:20.240917 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-05 23:06:20.240927 | orchestrator | Saturday 05 July 2025 22:56:01 +0000 (0:00:02.052) 0:00:49.271 ********* 2025-07-05 23:06:20.240939 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:06:20.240950 | orchestrator | 2025-07-05 23:06:20.240961 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2025-07-05 23:06:20.240971 | orchestrator | Saturday 05 July 2025 22:56:03 +0000 (0:00:01.592) 0:00:50.864 ********* 2025-07-05 23:06:20.240982 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:06:20.240994 | orchestrator | 2025-07-05 23:06:20.241005 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-05 23:06:20.241015 | orchestrator | Saturday 05 July 2025 22:56:04 +0000 (0:00:01.304) 0:00:52.169 ********* 2025-07-05 23:06:20.241026 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.241037 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.241049 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.241060 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.241071 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.241082 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.241093 | orchestrator | 2025-07-05 23:06:20.241104 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-05 23:06:20.241115 | orchestrator | Saturday 05 July 2025 22:56:06 +0000 (0:00:01.536) 0:00:53.705 ********* 2025-07-05 23:06:20.241126 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.241137 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.241148 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.241159 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.241170 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.241181 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.241192 | orchestrator | 2025-07-05 23:06:20.241203 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-05 23:06:20.241214 | orchestrator | Saturday 05 July 2025 22:56:07 +0000 
(0:00:01.008) 0:00:54.714 ********* 2025-07-05 23:06:20.241225 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.241236 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.241247 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.241258 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.241269 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.241281 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.241292 | orchestrator | 2025-07-05 23:06:20.241310 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-05 23:06:20.241321 | orchestrator | Saturday 05 July 2025 22:56:08 +0000 (0:00:01.131) 0:00:55.846 ********* 2025-07-05 23:06:20.241333 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.241343 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.241354 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.241366 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.241376 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.241387 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.241398 | orchestrator | 2025-07-05 23:06:20.241409 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-05 23:06:20.241420 | orchestrator | Saturday 05 July 2025 22:56:09 +0000 (0:00:00.927) 0:00:56.774 ********* 2025-07-05 23:06:20.241431 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.241442 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.241453 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.241464 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.241475 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.241486 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.241497 | orchestrator | 2025-07-05 23:06:20.241508 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-07-05 23:06:20.241526 | orchestrator | Saturday 05 July 2025 22:56:10 +0000 (0:00:01.465) 0:00:58.239 ********* 2025-07-05 23:06:20.241538 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.241549 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.241560 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.241571 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.241582 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.241593 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.241604 | orchestrator | 2025-07-05 23:06:20.241615 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-05 23:06:20.241667 | orchestrator | Saturday 05 July 2025 22:56:11 +0000 (0:00:00.696) 0:00:58.936 ********* 2025-07-05 23:06:20.241680 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.241691 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.241702 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.241719 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.241730 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.241741 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.241752 | orchestrator | 2025-07-05 23:06:20.241763 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-05 23:06:20.241775 | orchestrator | Saturday 05 July 2025 22:56:12 +0000 (0:00:00.976) 0:00:59.912 ********* 2025-07-05 23:06:20.241785 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.241797 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.241808 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.241819 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.241829 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.241841 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.241852 | orchestrator 
| 2025-07-05 23:06:20.241863 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-05 23:06:20.241874 | orchestrator | Saturday 05 July 2025 22:56:13 +0000 (0:00:01.463) 0:01:01.376 ********* 2025-07-05 23:06:20.241885 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.241896 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.241906 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.241917 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.241928 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.241939 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.241950 | orchestrator | 2025-07-05 23:06:20.241961 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-05 23:06:20.241972 | orchestrator | Saturday 05 July 2025 22:56:16 +0000 (0:00:02.158) 0:01:03.535 ********* 2025-07-05 23:06:20.241983 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.242001 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.242012 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.242055 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.242069 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.242080 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.242092 | orchestrator | 2025-07-05 23:06:20.242103 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-05 23:06:20.242114 | orchestrator | Saturday 05 July 2025 22:56:17 +0000 (0:00:01.068) 0:01:04.604 ********* 2025-07-05 23:06:20.242125 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.242136 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.242147 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.242158 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.242169 | orchestrator | ok: [testbed-node-1] 2025-07-05 
23:06:20.242180 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.242191 | orchestrator | 2025-07-05 23:06:20.242203 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-05 23:06:20.242216 | orchestrator | Saturday 05 July 2025 22:56:18 +0000 (0:00:01.057) 0:01:05.661 ********* 2025-07-05 23:06:20.242235 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.242264 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.242282 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.242298 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.242314 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.242331 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.242347 | orchestrator | 2025-07-05 23:06:20.242365 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-05 23:06:20.242386 | orchestrator | Saturday 05 July 2025 22:56:18 +0000 (0:00:00.735) 0:01:06.397 ********* 2025-07-05 23:06:20.242403 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.242422 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.242440 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.242458 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.242476 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.242494 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.242511 | orchestrator | 2025-07-05 23:06:20.242522 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-05 23:06:20.242533 | orchestrator | Saturday 05 July 2025 22:56:19 +0000 (0:00:00.942) 0:01:07.340 ********* 2025-07-05 23:06:20.242544 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.242555 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.242566 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.242577 | orchestrator | skipping: [testbed-node-0] 
2025-07-05 23:06:20.242588 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.242599 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.242610 | orchestrator | 2025-07-05 23:06:20.242647 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-05 23:06:20.242661 | orchestrator | Saturday 05 July 2025 22:56:20 +0000 (0:00:00.946) 0:01:08.286 ********* 2025-07-05 23:06:20.242672 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.242683 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.242694 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.242705 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.242716 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.242727 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.242738 | orchestrator | 2025-07-05 23:06:20.242749 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-05 23:06:20.242760 | orchestrator | Saturday 05 July 2025 22:56:22 +0000 (0:00:01.343) 0:01:09.630 ********* 2025-07-05 23:06:20.242771 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.242782 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.242792 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.242803 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.242814 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.242837 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.242849 | orchestrator | 2025-07-05 23:06:20.242880 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-05 23:06:20.242892 | orchestrator | Saturday 05 July 2025 22:56:22 +0000 (0:00:00.659) 0:01:10.290 ********* 2025-07-05 23:06:20.242903 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.242914 | orchestrator | skipping: [testbed-node-4] 
2025-07-05 23:06:20.242925 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.242936 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.242947 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.242958 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.242969 | orchestrator | 2025-07-05 23:06:20.242980 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-05 23:06:20.242991 | orchestrator | Saturday 05 July 2025 22:56:23 +0000 (0:00:00.782) 0:01:11.073 ********* 2025-07-05 23:06:20.243002 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.243013 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.243024 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.243043 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.243054 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.243065 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.243076 | orchestrator | 2025-07-05 23:06:20.243087 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-05 23:06:20.243098 | orchestrator | Saturday 05 July 2025 22:56:24 +0000 (0:00:00.629) 0:01:11.703 ********* 2025-07-05 23:06:20.243109 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.243120 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.243131 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.243141 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.243152 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.243163 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.243174 | orchestrator | 2025-07-05 23:06:20.243185 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-07-05 23:06:20.243197 | orchestrator | Saturday 05 July 2025 22:56:25 +0000 (0:00:01.366) 0:01:13.069 ********* 2025-07-05 23:06:20.243208 | orchestrator | changed: [testbed-node-4] 2025-07-05 
23:06:20.243219 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.243230 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.243241 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.243252 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.243263 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.243273 | orchestrator | 2025-07-05 23:06:20.243285 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-07-05 23:06:20.243296 | orchestrator | Saturday 05 July 2025 22:56:27 +0000 (0:00:01.697) 0:01:14.767 ********* 2025-07-05 23:06:20.243307 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.243318 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.243328 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.243340 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.243365 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.243377 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.243398 | orchestrator | 2025-07-05 23:06:20.243410 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-07-05 23:06:20.243421 | orchestrator | Saturday 05 July 2025 22:56:29 +0000 (0:00:01.948) 0:01:16.716 ********* 2025-07-05 23:06:20.243432 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:06:20.243444 | orchestrator | 2025-07-05 23:06:20.243455 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-07-05 23:06:20.243466 | orchestrator | Saturday 05 July 2025 22:56:30 +0000 (0:00:01.163) 0:01:17.879 ********* 2025-07-05 23:06:20.243477 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.243488 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.243499 
| orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.243517 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.243528 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.243539 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.243550 | orchestrator | 2025-07-05 23:06:20.243561 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-07-05 23:06:20.243572 | orchestrator | Saturday 05 July 2025 22:56:31 +0000 (0:00:00.772) 0:01:18.652 ********* 2025-07-05 23:06:20.243583 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.243594 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.243605 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.243616 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.243653 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.243665 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.243676 | orchestrator | 2025-07-05 23:06:20.243687 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-07-05 23:06:20.243698 | orchestrator | Saturday 05 July 2025 22:56:31 +0000 (0:00:00.547) 0:01:19.199 ********* 2025-07-05 23:06:20.243709 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-05 23:06:20.243720 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-05 23:06:20.243731 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-05 23:06:20.243742 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-05 23:06:20.243754 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-05 23:06:20.243765 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-05 23:06:20.243776 | 
orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-05 23:06:20.243787 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-05 23:06:20.243798 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-05 23:06:20.243809 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-05 23:06:20.243820 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-05 23:06:20.243839 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-05 23:06:20.243850 | orchestrator | 2025-07-05 23:06:20.243861 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-07-05 23:06:20.243873 | orchestrator | Saturday 05 July 2025 22:56:33 +0000 (0:00:01.480) 0:01:20.679 ********* 2025-07-05 23:06:20.243884 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.243895 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.243906 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.243917 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.243928 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.243939 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.243950 | orchestrator | 2025-07-05 23:06:20.243961 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-07-05 23:06:20.243977 | orchestrator | Saturday 05 July 2025 22:56:34 +0000 (0:00:00.896) 0:01:21.576 ********* 2025-07-05 23:06:20.243989 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.244000 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.244010 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.244021 | orchestrator | skipping: [testbed-node-0] 2025-07-05 
23:06:20.244032 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.244043 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.244054 | orchestrator | 2025-07-05 23:06:20.244065 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-07-05 23:06:20.244076 | orchestrator | Saturday 05 July 2025 22:56:34 +0000 (0:00:00.801) 0:01:22.378 ********* 2025-07-05 23:06:20.244098 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.244110 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.244120 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.244131 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.244142 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.244153 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.244164 | orchestrator | 2025-07-05 23:06:20.244175 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-07-05 23:06:20.244186 | orchestrator | Saturday 05 July 2025 22:56:35 +0000 (0:00:00.619) 0:01:22.998 ********* 2025-07-05 23:06:20.244197 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.244207 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.244218 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.244229 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.244240 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.244251 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.244261 | orchestrator | 2025-07-05 23:06:20.244272 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-07-05 23:06:20.244284 | orchestrator | Saturday 05 July 2025 22:56:36 +0000 (0:00:00.812) 0:01:23.810 ********* 2025-07-05 23:06:20.244295 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:06:20.244306 | orchestrator | 2025-07-05 23:06:20.244317 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-07-05 23:06:20.244328 | orchestrator | Saturday 05 July 2025 22:56:37 +0000 (0:00:01.130) 0:01:24.941 ********* 2025-07-05 23:06:20.244340 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.244351 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.244362 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.244373 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.244384 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.244395 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.244405 | orchestrator | 2025-07-05 23:06:20.244417 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-07-05 23:06:20.244428 | orchestrator | Saturday 05 July 2025 22:57:44 +0000 (0:01:07.068) 0:02:32.010 ********* 2025-07-05 23:06:20.244439 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-05 23:06:20.244450 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-05 23:06:20.244461 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-05 23:06:20.244472 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.244483 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-05 23:06:20.244494 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-05 23:06:20.244505 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-05 23:06:20.244516 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.244527 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2025-07-05 23:06:20.244538 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-05 23:06:20.244549 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-05 23:06:20.244560 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.244571 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-05 23:06:20.244582 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-05 23:06:20.244593 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-05 23:06:20.244604 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.244615 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-05 23:06:20.244659 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-05 23:06:20.244680 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-05 23:06:20.244698 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.244715 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-05 23:06:20.244733 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-05 23:06:20.244744 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-05 23:06:20.244755 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.244766 | orchestrator | 2025-07-05 23:06:20.244777 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-07-05 23:06:20.244788 | orchestrator | Saturday 05 July 2025 22:57:45 +0000 (0:00:00.839) 0:02:32.850 ********* 2025-07-05 23:06:20.244799 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.244810 | orchestrator | skipping: [testbed-node-4] 2025-07-05 
23:06:20.244821 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.244832 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.244843 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.244854 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.244865 | orchestrator | 2025-07-05 23:06:20.244876 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-07-05 23:06:20.244894 | orchestrator | Saturday 05 July 2025 22:57:46 +0000 (0:00:00.618) 0:02:33.468 ********* 2025-07-05 23:06:20.244906 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.244916 | orchestrator | 2025-07-05 23:06:20.244927 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-07-05 23:06:20.244939 | orchestrator | Saturday 05 July 2025 22:57:46 +0000 (0:00:00.140) 0:02:33.609 ********* 2025-07-05 23:06:20.244949 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.244961 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.244972 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.244982 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.244993 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.245004 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.245015 | orchestrator | 2025-07-05 23:06:20.245026 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-07-05 23:06:20.245037 | orchestrator | Saturday 05 July 2025 22:57:46 +0000 (0:00:00.808) 0:02:34.417 ********* 2025-07-05 23:06:20.245048 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.245059 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.245070 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.245081 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.245092 | orchestrator | skipping: [testbed-node-1] 2025-07-05 
23:06:20.245103 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.245113 | orchestrator | 2025-07-05 23:06:20.245125 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-07-05 23:06:20.245136 | orchestrator | Saturday 05 July 2025 22:57:47 +0000 (0:00:00.615) 0:02:35.032 ********* 2025-07-05 23:06:20.245147 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.245157 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.245168 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.245179 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.245190 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.245201 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.245211 | orchestrator | 2025-07-05 23:06:20.245223 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-07-05 23:06:20.245233 | orchestrator | Saturday 05 July 2025 22:57:48 +0000 (0:00:00.817) 0:02:35.850 ********* 2025-07-05 23:06:20.245244 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.245255 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.245266 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.245277 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.245296 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.245307 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.245318 | orchestrator | 2025-07-05 23:06:20.245329 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-07-05 23:06:20.245340 | orchestrator | Saturday 05 July 2025 22:57:50 +0000 (0:00:01.847) 0:02:37.698 ********* 2025-07-05 23:06:20.245351 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.245362 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.245373 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.245384 | orchestrator | ok: [testbed-node-0] 
2025-07-05 23:06:20.245395 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.245406 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.245416 | orchestrator | 2025-07-05 23:06:20.245427 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-07-05 23:06:20.245439 | orchestrator | Saturday 05 July 2025 22:57:51 +0000 (0:00:00.851) 0:02:38.550 ********* 2025-07-05 23:06:20.245450 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:06:20.245462 | orchestrator | 2025-07-05 23:06:20.245473 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-07-05 23:06:20.245484 | orchestrator | Saturday 05 July 2025 22:57:52 +0000 (0:00:01.209) 0:02:39.760 ********* 2025-07-05 23:06:20.245495 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.245506 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.245517 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.245528 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.245539 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.245550 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.245561 | orchestrator | 2025-07-05 23:06:20.245572 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-07-05 23:06:20.245583 | orchestrator | Saturday 05 July 2025 22:57:52 +0000 (0:00:00.622) 0:02:40.382 ********* 2025-07-05 23:06:20.245594 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.245605 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.245616 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.245687 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.245700 | orchestrator | skipping: [testbed-node-1] 2025-07-05 
23:06:20.245711 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.245722 | orchestrator | 2025-07-05 23:06:20.245733 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-07-05 23:06:20.245745 | orchestrator | Saturday 05 July 2025 22:57:53 +0000 (0:00:00.846) 0:02:41.229 ********* 2025-07-05 23:06:20.245756 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.245767 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.245778 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.245789 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.245800 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.245816 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.245826 | orchestrator | 2025-07-05 23:06:20.245836 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-07-05 23:06:20.245846 | orchestrator | Saturday 05 July 2025 22:57:54 +0000 (0:00:00.606) 0:02:41.835 ********* 2025-07-05 23:06:20.245856 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.245865 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.245875 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.245885 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.245895 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.245905 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.245915 | orchestrator | 2025-07-05 23:06:20.245925 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-07-05 23:06:20.245934 | orchestrator | Saturday 05 July 2025 22:57:55 +0000 (0:00:00.940) 0:02:42.776 ********* 2025-07-05 23:06:20.245952 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.245967 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.245977 | orchestrator | skipping: [testbed-node-5] 2025-07-05 
23:06:20.245987 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.245997 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.246006 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.246050 | orchestrator | 2025-07-05 23:06:20.246062 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-07-05 23:06:20.246074 | orchestrator | Saturday 05 July 2025 22:57:55 +0000 (0:00:00.662) 0:02:43.438 ********* 2025-07-05 23:06:20.246084 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.246094 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.246103 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.246113 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.246124 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.246133 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.246143 | orchestrator | 2025-07-05 23:06:20.246153 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-07-05 23:06:20.246163 | orchestrator | Saturday 05 July 2025 22:57:56 +0000 (0:00:00.871) 0:02:44.309 ********* 2025-07-05 23:06:20.246173 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.246183 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.246193 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.246202 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.246212 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.246222 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.246231 | orchestrator | 2025-07-05 23:06:20.246241 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-07-05 23:06:20.246251 | orchestrator | Saturday 05 July 2025 22:57:57 +0000 (0:00:00.640) 0:02:44.950 ********* 2025-07-05 23:06:20.246261 | orchestrator | skipping: [testbed-node-3] 2025-07-05 
23:06:20.246271 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.246281 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.246290 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.246300 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.246310 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.246319 | orchestrator | 2025-07-05 23:06:20.246329 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-07-05 23:06:20.246339 | orchestrator | Saturday 05 July 2025 22:57:58 +0000 (0:00:00.639) 0:02:45.589 ********* 2025-07-05 23:06:20.246350 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.246360 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.246369 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.246379 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.246389 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.246399 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.246408 | orchestrator | 2025-07-05 23:06:20.246418 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-07-05 23:06:20.246428 | orchestrator | Saturday 05 July 2025 22:57:59 +0000 (0:00:00.980) 0:02:46.569 ********* 2025-07-05 23:06:20.246438 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:06:20.246448 | orchestrator | 2025-07-05 23:06:20.246458 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-07-05 23:06:20.246467 | orchestrator | Saturday 05 July 2025 22:58:00 +0000 (0:00:00.928) 0:02:47.498 ********* 2025-07-05 23:06:20.246477 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-07-05 23:06:20.246487 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-07-05 
23:06:20.246497 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-07-05 23:06:20.246507 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-07-05 23:06:20.246521 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-07-05 23:06:20.246555 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-07-05 23:06:20.246578 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-07-05 23:06:20.246592 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-07-05 23:06:20.246607 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-07-05 23:06:20.246642 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-07-05 23:06:20.246659 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-07-05 23:06:20.246674 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-07-05 23:06:20.246689 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-07-05 23:06:20.246704 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-07-05 23:06:20.246720 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-07-05 23:06:20.246738 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-07-05 23:06:20.246754 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-07-05 23:06:20.246771 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-07-05 23:06:20.246789 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-07-05 23:06:20.246804 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-07-05 23:06:20.246837 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-07-05 23:06:20.246848 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-07-05 23:06:20.246858 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 
2025-07-05 23:06:20.246868 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-07-05 23:06:20.246878 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-07-05 23:06:20.246888 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-07-05 23:06:20.246897 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-07-05 23:06:20.246907 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-07-05 23:06:20.246917 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-07-05 23:06:20.246934 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-07-05 23:06:20.246944 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-07-05 23:06:20.246954 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-07-05 23:06:20.246964 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-07-05 23:06:20.246974 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-07-05 23:06:20.246983 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-07-05 23:06:20.246993 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-07-05 23:06:20.247003 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-07-05 23:06:20.247013 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-07-05 23:06:20.247022 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-07-05 23:06:20.247032 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-07-05 23:06:20.247042 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-07-05 23:06:20.247052 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-07-05 23:06:20.247062 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-07-05 
23:06:20.247071 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-05 23:06:20.247081 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-07-05 23:06:20.247091 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-05 23:06:20.247100 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-07-05 23:06:20.247110 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-07-05 23:06:20.247120 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-05 23:06:20.247139 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-07-05 23:06:20.247149 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-05 23:06:20.247159 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-05 23:06:20.247169 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-05 23:06:20.247178 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-05 23:06:20.247188 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-05 23:06:20.247198 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-05 23:06:20.247207 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-05 23:06:20.247217 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-07-05 23:06:20.247227 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-05 23:06:20.247237 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-05 23:06:20.247246 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-05 23:06:20.247256 | orchestrator | changed: [testbed-node-4] => 
(item=/var/lib/ceph/bootstrap-osd) 2025-07-05 23:06:20.247266 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-05 23:06:20.247275 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-05 23:06:20.247285 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-05 23:06:20.247295 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-05 23:06:20.247304 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-05 23:06:20.247314 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-05 23:06:20.247324 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-05 23:06:20.247334 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-05 23:06:20.247344 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-05 23:06:20.247354 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-05 23:06:20.247364 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-05 23:06:20.247374 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-05 23:06:20.247383 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-05 23:06:20.247393 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-05 23:06:20.247403 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-05 23:06:20.247413 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-07-05 23:06:20.247429 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-05 23:06:20.247440 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-07-05 23:06:20.247450 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-05 23:06:20.247459 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-05 23:06:20.247469 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-05 23:06:20.247479 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-07-05 23:06:20.247489 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-05 23:06:20.247499 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-07-05 23:06:20.247514 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-05 23:06:20.247524 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-07-05 23:06:20.247540 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-07-05 23:06:20.247550 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-07-05 23:06:20.247561 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-05 23:06:20.247570 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-07-05 23:06:20.247580 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-07-05 23:06:20.247590 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-07-05 23:06:20.247600 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-07-05 23:06:20.247610 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-07-05 23:06:20.247620 | orchestrator | 2025-07-05 23:06:20.247655 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-07-05 23:06:20.247666 | orchestrator | Saturday 05 July 2025 22:58:06 +0000 (0:00:06.035) 0:02:53.534 ********* 2025-07-05 23:06:20.247676 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.247686 | orchestrator | skipping: 
[testbed-node-1] 2025-07-05 23:06:20.247696 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.247707 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:06:20.247717 | orchestrator | 2025-07-05 23:06:20.247727 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-07-05 23:06:20.247737 | orchestrator | Saturday 05 July 2025 22:58:07 +0000 (0:00:01.006) 0:02:54.540 ********* 2025-07-05 23:06:20.247747 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-05 23:06:20.247757 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-05 23:06:20.247767 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-05 23:06:20.247777 | orchestrator | 2025-07-05 23:06:20.247787 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-07-05 23:06:20.247797 | orchestrator | Saturday 05 July 2025 22:58:07 +0000 (0:00:00.795) 0:02:55.335 ********* 2025-07-05 23:06:20.247807 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-05 23:06:20.247817 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-05 23:06:20.247828 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-05 23:06:20.247837 | orchestrator | 2025-07-05 23:06:20.247847 | orchestrator | TASK [ceph-config : Reset num_osds] 
******************************************** 2025-07-05 23:06:20.247857 | orchestrator | Saturday 05 July 2025 22:58:09 +0000 (0:00:01.347) 0:02:56.683 ********* 2025-07-05 23:06:20.247867 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.247877 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.247887 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.247897 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.247906 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.247916 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.247926 | orchestrator | 2025-07-05 23:06:20.247936 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-07-05 23:06:20.247946 | orchestrator | Saturday 05 July 2025 22:58:09 +0000 (0:00:00.550) 0:02:57.233 ********* 2025-07-05 23:06:20.247955 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.247965 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.247975 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.247985 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.247995 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.248004 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.248021 | orchestrator | 2025-07-05 23:06:20.248031 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-07-05 23:06:20.248041 | orchestrator | Saturday 05 July 2025 22:58:10 +0000 (0:00:00.686) 0:02:57.920 ********* 2025-07-05 23:06:20.248051 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.248061 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.248071 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.248080 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.248145 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.248157 | orchestrator | skipping: [testbed-node-2] 2025-07-05 
23:06:20.248167 | orchestrator |
TASK [ceph-config : Set_fact rejected_devices] *********************************
Saturday 05 July 2025 22:58:11 +0000 (0:00:00.607) 0:02:58.527 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _devices] *****************************************
Saturday 05 July 2025 22:58:11 +0000 (0:00:00.785) 0:02:59.313 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Saturday 05 July 2025 22:58:12 +0000 (0:00:00.641) 0:02:59.954 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Saturday 05 July 2025 22:58:13 +0000 (0:00:00.839) 0:03:00.794 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Saturday 05 July 2025 22:58:14 +0000 (0:00:00.778) 0:03:01.572 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Saturday 05 July 2025 22:58:14 +0000 (0:00:00.757) 0:03:02.330 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-4]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Saturday 05 July 2025 22:58:18 +0000 (0:00:03.120) 0:03:05.450 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Saturday 05 July 2025 22:58:18 +0000 (0:00:00.732) 0:03:06.183 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Saturday 05 July 2025 22:58:19 +0000 (0:00:00.548) 0:03:06.731 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Render rgw configs] ****************************************
Saturday 05 July 2025 22:58:19 +0000 (0:00:00.671) 0:03:07.403 *********
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set config to cluster] *************************************
Saturday 05 July 2025 22:58:20 +0000 (0:00:00.676) 0:03:08.080 *********
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set rgw configs to file] ***********************************
Saturday 05 July 2025 22:58:21 +0000 (0:00:00.899) 0:03:08.979 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Create ceph conf directory] ********************************
Saturday 05 July 2025 22:58:22 +0000 (0:00:00.612) 0:03:09.592 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Saturday 05 July 2025 22:58:22 +0000 (0:00:00.841) 0:03:10.433 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Saturday 05 July 2025 22:58:23 +0000 (0:00:00.685) 0:03:11.119 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Saturday 05 July 2025 22:58:24 +0000 (0:00:00.823) 0:03:11.942 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Saturday 05 July 2025 22:58:25 +0000 (0:00:00.649) 0:03:12.592 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
skipping: [testbed-node-0]
ok: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _interface] ****************************************
Saturday 05 July 2025 22:58:26 +0000 (0:00:00.984) 0:03:13.577 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Saturday 05 July 2025 22:58:26 +0000 (0:00:00.446) 0:03:14.024 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Saturday 05 July 2025 22:58:26 +0000 (0:00:00.414) 0:03:14.438 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Saturday 05 July 2025 22:58:27 +0000 (0:00:00.408) 0:03:14.847 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Saturday 05 July 2025 22:58:28 +0000 (0:00:00.760) 0:03:15.607 *********
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-5] => (item=0)
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]

TASK [ceph-config : Generate Ceph file] ****************************************
Saturday 05 July 2025 22:58:29 +0000 (0:00:01.662) 0:03:17.270 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 05 July 2025 22:58:32 +0000 (0:00:02.177) 0:03:19.447 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Saturday 05 July 2025 22:58:32 +0000 (0:00:00.980) 0:03:20.427 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Saturday 05 July 2025 22:58:33 +0000 (0:00:00.850) 0:03:21.278 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Saturday 05 July 2025 22:58:34 +0000 (0:00:00.296) 0:03:21.574 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Saturday 05 July 2025 22:58:35 +0000 (0:00:01.568) 0:03:23.143 *********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Saturday 05 July 2025 22:58:36 +0000 (0:00:00.712) 0:03:23.856 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Saturday 05 July 2025 22:58:36 +0000 (0:00:00.383) 0:03:24.240 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Saturday 05 July 2025 22:58:37 +0000 (0:00:01.114) 0:03:25.354 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Saturday 05 July 2025 22:58:38 +0000 (0:00:00.457) 0:03:25.811 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Saturday 05 July 2025 22:58:38 +0000 (0:00:00.325) 0:03:26.137 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Saturday 05 July 2025 22:58:38 +0000 (0:00:00.253) 0:03:26.391 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Saturday 05 July 2025 22:58:39 +0000 (0:00:00.331) 0:03:26.722 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Saturday 05 July 2025 22:58:39 +0000 (0:00:00.270) 0:03:26.993 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Saturday 05 July 2025 22:58:39 +0000 (0:00:00.213) 0:03:27.207 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Saturday 05 July 2025 22:58:40 +0000 (0:00:00.327) 0:03:27.534 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Saturday 05 July 2025 22:58:40 +0000 (0:00:00.241) 0:03:27.776 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Saturday 05 July 2025 22:58:40 +0000 (0:00:00.256) 0:03:28.033 *********
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Saturday 05 July 2025 22:58:41 +0000 (0:00:00.417) 0:03:28.450 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Saturday 05 July 2025 22:58:41 +0000 (0:00:00.345) 0:03:28.796 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Saturday 05 July 2025 22:58:41 +0000 (0:00:00.228) 0:03:29.024 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Saturday 05 July 2025 22:58:41 +0000 (0:00:00.223) 0:03:29.248 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Saturday 05 July 2025 22:58:42 +0000 (0:00:01.001) 0:03:30.250 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Saturday 05 July 2025 22:58:43 +0000 (0:00:00.333) 0:03:30.583 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Saturday 05 July 2025 22:58:44 +0000 (0:00:01.211) 0:03:31.794 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Saturday 05 July 2025 22:58:45 +0000 (0:00:01.069) 0:03:32.864 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Saturday 05 July 2025 22:58:45 +0000 (0:00:00.353) 0:03:33.218 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Saturday 05 July 2025 22:58:46 +0000 (0:00:00.887) 0:03:34.105 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Saturday 05 July 2025 22:58:47 +0000 (0:00:00.569) 0:03:34.675 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Saturday 05 July 2025 22:58:48 +0000 (0:00:01.333) 0:03:36.008 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Saturday 05 July 2025 22:58:49 +0000 (0:00:00.600) 0:03:36.609 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
Saturday 05 July 2025 22:58:49 +0000 (0:00:00.335) 0:03:36.944 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Saturday 05 July 2025 22:58:50 +0000 (0:00:00.960) 0:03:37.905 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Saturday 05 July 2025 22:58:51 +0000 (0:00:01.118) 0:03:39.024 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Saturday 05 July 2025 22:58:51 +0000 (0:00:00.396) 0:03:39.421 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Saturday 05 July 2025 22:58:53 +0000 (0:00:01.265) 0:03:40.686 *********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Saturday 05 July 2025 22:58:54 +0000 (0:00:00.770) 0:03:41.456 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mon] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 05 July 2025 22:58:54 +0000 (0:00:00.744) 0:03:42.201 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 05 July 2025 22:58:55 +0000 (0:00:00.500) 0:03:42.702 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 05 July 2025 22:58:55 +0000 (0:00:00.697) 0:03:43.399 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 05 July 2025 22:58:56 +0000 (0:00:00.762) 0:03:44.162 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
2025-07-05 23:06:20.253176 | orchestrator | Saturday 05 July 2025 22:58:56 +0000 (0:00:00.283) 0:03:44.445 ********* 2025-07-05 23:06:20.253186 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.253197 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.253209 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.253220 | orchestrator | 2025-07-05 23:06:20.253232 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-05 23:06:20.253243 | orchestrator | Saturday 05 July 2025 22:58:57 +0000 (0:00:00.334) 0:03:44.780 ********* 2025-07-05 23:06:20.253254 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.253266 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.253278 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.253290 | orchestrator | 2025-07-05 23:06:20.253299 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-05 23:06:20.253306 | orchestrator | Saturday 05 July 2025 22:58:57 +0000 (0:00:00.482) 0:03:45.263 ********* 2025-07-05 23:06:20.253313 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.253319 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.253326 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.253333 | orchestrator | 2025-07-05 23:06:20.253339 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-05 23:06:20.253346 | orchestrator | Saturday 05 July 2025 22:58:58 +0000 (0:00:00.745) 0:03:46.008 ********* 2025-07-05 23:06:20.253353 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.253360 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.253367 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.253373 | orchestrator | 2025-07-05 23:06:20.253380 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-05 
23:06:20.253387 | orchestrator | Saturday 05 July 2025 22:58:58 +0000 (0:00:00.330) 0:03:46.339 ********* 2025-07-05 23:06:20.253394 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.253400 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.253407 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.253414 | orchestrator | 2025-07-05 23:06:20.253454 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-05 23:06:20.253462 | orchestrator | Saturday 05 July 2025 22:58:59 +0000 (0:00:00.311) 0:03:46.650 ********* 2025-07-05 23:06:20.253469 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.253476 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.253482 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.253489 | orchestrator | 2025-07-05 23:06:20.253496 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-05 23:06:20.253503 | orchestrator | Saturday 05 July 2025 22:59:00 +0000 (0:00:01.045) 0:03:47.696 ********* 2025-07-05 23:06:20.253510 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.253517 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.253523 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.253530 | orchestrator | 2025-07-05 23:06:20.253537 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-05 23:06:20.253549 | orchestrator | Saturday 05 July 2025 22:59:01 +0000 (0:00:00.836) 0:03:48.532 ********* 2025-07-05 23:06:20.253556 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.253563 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.253570 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.253576 | orchestrator | 2025-07-05 23:06:20.253583 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-05 23:06:20.253590 | orchestrator | 
Saturday 05 July 2025 22:59:01 +0000 (0:00:00.339) 0:03:48.872 ********* 2025-07-05 23:06:20.253597 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.253609 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.253616 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.253650 | orchestrator | 2025-07-05 23:06:20.253658 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-05 23:06:20.253665 | orchestrator | Saturday 05 July 2025 22:59:01 +0000 (0:00:00.327) 0:03:49.199 ********* 2025-07-05 23:06:20.253675 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.253687 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.253697 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.253708 | orchestrator | 2025-07-05 23:06:20.253720 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-05 23:06:20.253731 | orchestrator | Saturday 05 July 2025 22:59:02 +0000 (0:00:00.513) 0:03:49.713 ********* 2025-07-05 23:06:20.253743 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.253754 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.253765 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.253777 | orchestrator | 2025-07-05 23:06:20.253787 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-05 23:06:20.253799 | orchestrator | Saturday 05 July 2025 22:59:02 +0000 (0:00:00.318) 0:03:50.031 ********* 2025-07-05 23:06:20.253811 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.253822 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.253833 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.253841 | orchestrator | 2025-07-05 23:06:20.253848 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-05 23:06:20.253855 | orchestrator | Saturday 05 July 2025 
22:59:02 +0000 (0:00:00.312) 0:03:50.344 ********* 2025-07-05 23:06:20.253862 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.253869 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.253875 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.253882 | orchestrator | 2025-07-05 23:06:20.253889 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-05 23:06:20.253897 | orchestrator | Saturday 05 July 2025 22:59:03 +0000 (0:00:00.272) 0:03:50.616 ********* 2025-07-05 23:06:20.253903 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.253914 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.253925 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.253936 | orchestrator | 2025-07-05 23:06:20.253948 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-05 23:06:20.253959 | orchestrator | Saturday 05 July 2025 22:59:03 +0000 (0:00:00.388) 0:03:51.005 ********* 2025-07-05 23:06:20.253970 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.253981 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.253993 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.254003 | orchestrator | 2025-07-05 23:06:20.254069 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-05 23:06:20.254086 | orchestrator | Saturday 05 July 2025 22:59:03 +0000 (0:00:00.275) 0:03:51.281 ********* 2025-07-05 23:06:20.254098 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.254109 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.254121 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.254132 | orchestrator | 2025-07-05 23:06:20.254144 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-05 23:06:20.254155 | orchestrator | Saturday 05 July 2025 22:59:04 +0000 (0:00:00.358) 
0:03:51.639 ********* 2025-07-05 23:06:20.254166 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.254177 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.254189 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.254202 | orchestrator | 2025-07-05 23:06:20.254214 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-07-05 23:06:20.254225 | orchestrator | Saturday 05 July 2025 22:59:04 +0000 (0:00:00.639) 0:03:52.279 ********* 2025-07-05 23:06:20.254235 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.254245 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.254255 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.254275 | orchestrator | 2025-07-05 23:06:20.254287 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-07-05 23:06:20.254298 | orchestrator | Saturday 05 July 2025 22:59:05 +0000 (0:00:00.271) 0:03:52.550 ********* 2025-07-05 23:06:20.254310 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:06:20.254322 | orchestrator | 2025-07-05 23:06:20.254332 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-07-05 23:06:20.254344 | orchestrator | Saturday 05 July 2025 22:59:05 +0000 (0:00:00.513) 0:03:53.064 ********* 2025-07-05 23:06:20.254356 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.254367 | orchestrator | 2025-07-05 23:06:20.254379 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-07-05 23:06:20.254434 | orchestrator | Saturday 05 July 2025 22:59:05 +0000 (0:00:00.139) 0:03:53.203 ********* 2025-07-05 23:06:20.254448 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-07-05 23:06:20.254460 | orchestrator | 2025-07-05 23:06:20.254472 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2025-07-05 23:06:20.254483 | orchestrator | Saturday 05 July 2025 22:59:06 +0000 (0:00:01.079) 0:03:54.283 ********* 2025-07-05 23:06:20.254494 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.254506 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.254518 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.254528 | orchestrator | 2025-07-05 23:06:20.254540 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-07-05 23:06:20.254551 | orchestrator | Saturday 05 July 2025 22:59:07 +0000 (0:00:00.311) 0:03:54.594 ********* 2025-07-05 23:06:20.254563 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.254574 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.254585 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.254596 | orchestrator | 2025-07-05 23:06:20.254616 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-07-05 23:06:20.254767 | orchestrator | Saturday 05 July 2025 22:59:07 +0000 (0:00:00.316) 0:03:54.910 ********* 2025-07-05 23:06:20.254783 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.254792 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.254798 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.254805 | orchestrator | 2025-07-05 23:06:20.254812 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-07-05 23:06:20.254819 | orchestrator | Saturday 05 July 2025 22:59:08 +0000 (0:00:01.135) 0:03:56.046 ********* 2025-07-05 23:06:20.254826 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.254833 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.254840 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.254847 | orchestrator | 2025-07-05 23:06:20.254854 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2025-07-05 23:06:20.254861 | orchestrator | Saturday 05 July 2025 22:59:09 +0000 (0:00:00.964) 0:03:57.011 ********* 2025-07-05 23:06:20.254868 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.254875 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.254881 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.254888 | orchestrator | 2025-07-05 23:06:20.254895 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-07-05 23:06:20.254903 | orchestrator | Saturday 05 July 2025 22:59:10 +0000 (0:00:00.685) 0:03:57.696 ********* 2025-07-05 23:06:20.254909 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.254916 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.254923 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.254930 | orchestrator | 2025-07-05 23:06:20.254937 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-07-05 23:06:20.254944 | orchestrator | Saturday 05 July 2025 22:59:10 +0000 (0:00:00.742) 0:03:58.438 ********* 2025-07-05 23:06:20.254951 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.254957 | orchestrator | 2025-07-05 23:06:20.254964 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-07-05 23:06:20.254979 | orchestrator | Saturday 05 July 2025 22:59:12 +0000 (0:00:01.358) 0:03:59.797 ********* 2025-07-05 23:06:20.254985 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.254991 | orchestrator | 2025-07-05 23:06:20.254998 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-07-05 23:06:20.255005 | orchestrator | Saturday 05 July 2025 22:59:13 +0000 (0:00:00.685) 0:04:00.483 ********* 2025-07-05 23:06:20.255011 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-05 23:06:20.255017 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:06:20.255024 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:06:20.255030 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-05 23:06:20.255037 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-07-05 23:06:20.255043 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-05 23:06:20.255050 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-05 23:06:20.255056 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-07-05 23:06:20.255062 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-05 23:06:20.255069 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-07-05 23:06:20.255075 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-07-05 23:06:20.255081 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-07-05 23:06:20.255088 | orchestrator | 2025-07-05 23:06:20.255094 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-07-05 23:06:20.255101 | orchestrator | Saturday 05 July 2025 22:59:16 +0000 (0:00:03.278) 0:04:03.762 ********* 2025-07-05 23:06:20.255107 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.255113 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.255120 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.255126 | orchestrator | 2025-07-05 23:06:20.255133 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-07-05 23:06:20.255139 | orchestrator | Saturday 05 July 2025 22:59:17 +0000 (0:00:01.486) 0:04:05.248 ********* 2025-07-05 23:06:20.255145 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.255152 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.255158 | orchestrator | ok: [testbed-node-2] 
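The nine status lines of the "Copy admin keyring over to mons" task above come from each play host looping over every mon and delegating the copy to it. A minimal sketch of that host-by-delegate pattern (hostnames taken from this log; the pairing logic is an assumption about how the loop expands, not ceph-ansible's actual task code):

```python
# Sketch (assumption): reproduce the host x delegate expansion behind the
# "Copy admin keyring over to mons" task, where each play host runs the copy
# once per mon via delegate_to, yielding n*n task-result lines in the log.
MONS = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]

def delegation_pairs(hosts):
    # For every host the task runs on, Ansible iterates the mon list and
    # delegates the copy to each entry in turn.
    return [(host, delegate) for host in hosts for delegate in MONS]

pairs = delegation_pairs(MONS)
print(len(pairs))  # 9 results, matching the nine status lines above
```

Only the pairs where the delegate does not yet hold the keyring report `changed`; the rest come back `ok`, which is exactly the mix of statuses the log shows.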
2025-07-05 23:06:20.255165 | orchestrator | 2025-07-05 23:06:20.255171 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-07-05 23:06:20.255177 | orchestrator | Saturday 05 July 2025 22:59:18 +0000 (0:00:00.325) 0:04:05.574 ********* 2025-07-05 23:06:20.255184 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.255190 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.255196 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.255202 | orchestrator | 2025-07-05 23:06:20.255209 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-07-05 23:06:20.255215 | orchestrator | Saturday 05 July 2025 22:59:18 +0000 (0:00:00.358) 0:04:05.932 ********* 2025-07-05 23:06:20.255222 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.255229 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.255235 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.255241 | orchestrator | 2025-07-05 23:06:20.255291 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-07-05 23:06:20.255299 | orchestrator | Saturday 05 July 2025 22:59:20 +0000 (0:00:01.711) 0:04:07.643 ********* 2025-07-05 23:06:20.255306 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.255312 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.255318 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.255325 | orchestrator | 2025-07-05 23:06:20.255331 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-07-05 23:06:20.255338 | orchestrator | Saturday 05 July 2025 22:59:21 +0000 (0:00:01.540) 0:04:09.184 ********* 2025-07-05 23:06:20.255344 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.255360 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.255366 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.255373 
| orchestrator | 2025-07-05 23:06:20.255379 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-07-05 23:06:20.255390 | orchestrator | Saturday 05 July 2025 22:59:22 +0000 (0:00:00.320) 0:04:09.505 ********* 2025-07-05 23:06:20.255397 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:06:20.255403 | orchestrator | 2025-07-05 23:06:20.255410 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-07-05 23:06:20.255416 | orchestrator | Saturday 05 July 2025 22:59:22 +0000 (0:00:00.492) 0:04:09.997 ********* 2025-07-05 23:06:20.255422 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.255429 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.255435 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.255441 | orchestrator | 2025-07-05 23:06:20.255448 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-07-05 23:06:20.255455 | orchestrator | Saturday 05 July 2025 22:59:23 +0000 (0:00:00.493) 0:04:10.491 ********* 2025-07-05 23:06:20.255461 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.255467 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.255474 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.255480 | orchestrator | 2025-07-05 23:06:20.255486 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-07-05 23:06:20.255493 | orchestrator | Saturday 05 July 2025 22:59:23 +0000 (0:00:00.292) 0:04:10.784 ********* 2025-07-05 23:06:20.255499 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:06:20.255505 | orchestrator | 2025-07-05 23:06:20.255512 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2025-07-05 23:06:20.255518 | orchestrator | Saturday 05 July 2025 22:59:23 +0000 (0:00:00.486) 0:04:11.271 ********* 2025-07-05 23:06:20.255525 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.255531 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.255537 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.255544 | orchestrator | 2025-07-05 23:06:20.255550 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-07-05 23:06:20.255556 | orchestrator | Saturday 05 July 2025 22:59:25 +0000 (0:00:02.044) 0:04:13.315 ********* 2025-07-05 23:06:20.255563 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.255569 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.255575 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.255581 | orchestrator | 2025-07-05 23:06:20.255588 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-07-05 23:06:20.255594 | orchestrator | Saturday 05 July 2025 22:59:27 +0000 (0:00:01.151) 0:04:14.467 ********* 2025-07-05 23:06:20.255601 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.255607 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.255613 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.255619 | orchestrator | 2025-07-05 23:06:20.255649 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-07-05 23:06:20.255660 | orchestrator | Saturday 05 July 2025 22:59:28 +0000 (0:00:01.693) 0:04:16.161 ********* 2025-07-05 23:06:20.255666 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.255673 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.255679 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.255686 | orchestrator | 2025-07-05 23:06:20.255692 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2025-07-05 23:06:20.255698 | orchestrator | Saturday 05 July 2025 22:59:30 +0000 (0:00:01.901) 0:04:18.063 ********* 2025-07-05 23:06:20.255704 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:06:20.255711 | orchestrator | 2025-07-05 23:06:20.255717 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-07-05 23:06:20.255729 | orchestrator | Saturday 05 July 2025 22:59:31 +0000 (0:00:00.926) 0:04:18.989 ********* 2025-07-05 23:06:20.255735 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-07-05 23:06:20.255741 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.255748 | orchestrator | 2025-07-05 23:06:20.255754 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-07-05 23:06:20.255760 | orchestrator | Saturday 05 July 2025 22:59:53 +0000 (0:00:21.976) 0:04:40.965 ********* 2025-07-05 23:06:20.255767 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.255773 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.255779 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.255785 | orchestrator | 2025-07-05 23:06:20.255792 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-07-05 23:06:20.255798 | orchestrator | Saturday 05 July 2025 23:00:03 +0000 (0:00:09.652) 0:04:50.617 ********* 2025-07-05 23:06:20.255805 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.255811 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.255817 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.255823 | orchestrator | 2025-07-05 23:06:20.255830 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-07-05 23:06:20.255836 | orchestrator | 
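The "Waiting for the monitor(s) to form the quorum..." task above failed once (10 retries left) and then succeeded, which is Ansible's `retries`/`delay`/`until` loop doing its job while the freshly started mons negotiate quorum. A self-contained sketch of that retry semantics (the `wait_for` helper and the toy quorum check are illustrative stand-ins, not Ansible or Ceph APIs):

```python
import time

# Sketch (assumption): mimic the retries/until behaviour behind the quorum
# wait task, which polls until the condition holds or the budget runs out.
def wait_for(check, retries=10, delay=2):
    for attempt in range(retries + 1):
        if check():
            return attempt  # failed attempts before success
        time.sleep(delay)
    raise TimeoutError("condition not met within retry budget")

# Toy stand-in for polling quorum status: succeeds on the second poll,
# matching the single FAILED-RETRYING line in the log.
state = {"polls": 0}
def quorum_formed():
    state["polls"] += 1
    return state["polls"] >= 2

print(wait_for(quorum_formed, retries=10, delay=0))  # 1
```

The ~22 s this task took in the log is dominated by the retry delay plus the mons' own election, not by the check itself.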
Saturday 05 July 2025 23:00:03 +0000 (0:00:00.322) 0:04:50.939 ********* 2025-07-05 23:06:20.255868 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66a504b7063c6bec1f4cf32d8b4339549a7e8eb5'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-07-05 23:06:20.255882 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66a504b7063c6bec1f4cf32d8b4339549a7e8eb5'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-07-05 23:06:20.255890 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66a504b7063c6bec1f4cf32d8b4339549a7e8eb5'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-07-05 23:06:20.255897 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66a504b7063c6bec1f4cf32d8b4339549a7e8eb5'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-07-05 23:06:20.255905 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66a504b7063c6bec1f4cf32d8b4339549a7e8eb5'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-07-05 23:06:20.255912 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66a504b7063c6bec1f4cf32d8b4339549a7e8eb5'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__66a504b7063c6bec1f4cf32d8b4339549a7e8eb5'}])  2025-07-05 23:06:20.255920 | orchestrator | 2025-07-05 23:06:20.255930 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-05 23:06:20.255937 | orchestrator | Saturday 05 July 2025 23:00:19 +0000 (0:00:15.778) 0:05:06.718 ********* 2025-07-05 23:06:20.255943 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.255950 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.255956 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.255962 | orchestrator | 2025-07-05 23:06:20.255968 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-07-05 23:06:20.255975 | orchestrator | Saturday 05 July 2025 23:00:19 +0000 (0:00:00.334) 0:05:07.052 ********* 2025-07-05 23:06:20.255981 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:06:20.255988 | orchestrator | 2025-07-05 23:06:20.255994 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-07-05 23:06:20.256001 | orchestrator | Saturday 05 July 2025 23:00:20 +0000 (0:00:00.788) 0:05:07.841 ********* 2025-07-05 23:06:20.256007 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.256013 | orchestrator | ok: [testbed-node-1] 2025-07-05 
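The "Set cluster configs" task above loops over a nested section/option structure and skips the `osd_crush_chooseleaf_type` item because its value is an Ansible omit placeholder. A sketch of that flattening-and-skipping step (values copied from this log; the `flatten` helper is illustrative, not the role's actual implementation, and the placeholder suffix is truncated for readability):

```python
# Sketch (assumption): flatten a ceph_conf_overrides-style mapping into
# (section, option, value) triples, dropping omit placeholders the way the
# task skipped osd_crush_chooseleaf_type in the log above.
overrides = {
    "global": {
        "public_network": "192.168.16.0/20",
        "cluster_network": "192.168.16.0/20",
        "osd_pool_default_crush_rule": -1,
        "ms_bind_ipv6": "False",
        "ms_bind_ipv4": "True",
        "osd_crush_chooseleaf_type": "__omit_place_holder__abc123",
    }
}

def flatten(conf):
    return [
        (section, option, value)
        for section, options in conf.items()
        for option, value in options.items()
        if not str(value).startswith("__omit_place_holder__")
    ]

for triple in flatten(overrides):
    print(triple)  # five applied settings; the omit placeholder is skipped
```

That yields the five `changed` items plus one `skipping` item, matching the task output.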
23:06:20.256019 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.256026 | orchestrator |
2025-07-05 23:06:20.256032 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-07-05 23:06:20.256039 | orchestrator | Saturday 05 July 2025 23:00:20 +0000 (0:00:00.358) 0:05:08.199 *********
2025-07-05 23:06:20.256045 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.256052 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.256058 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.256064 | orchestrator |
2025-07-05 23:06:20.256071 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-07-05 23:06:20.256077 | orchestrator | Saturday 05 July 2025 23:00:21 +0000 (0:00:00.364) 0:05:08.564 *********
2025-07-05 23:06:20.256083 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-05 23:06:20.256090 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-05 23:06:20.256096 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-05 23:06:20.256102 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.256109 | orchestrator |
2025-07-05 23:06:20.256115 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-07-05 23:06:20.256121 | orchestrator | Saturday 05 July 2025 23:00:21 +0000 (0:00:00.843) 0:05:09.407 *********
2025-07-05 23:06:20.256128 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.256134 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.256140 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.256146 | orchestrator |
2025-07-05 23:06:20.256171 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-07-05 23:06:20.256179 | orchestrator |
2025-07-05 23:06:20.256185 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-05 23:06:20.256192 | orchestrator | Saturday 05 July 2025 23:00:22 +0000 (0:00:00.867) 0:05:10.274 *********
2025-07-05 23:06:20.256198 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:06:20.256205 | orchestrator |
2025-07-05 23:06:20.256211 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-05 23:06:20.256218 | orchestrator | Saturday 05 July 2025 23:00:23 +0000 (0:00:00.525) 0:05:10.800 *********
2025-07-05 23:06:20.256224 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:06:20.256231 | orchestrator |
2025-07-05 23:06:20.256241 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-05 23:06:20.256247 | orchestrator | Saturday 05 July 2025 23:00:24 +0000 (0:00:00.740) 0:05:11.541 *********
2025-07-05 23:06:20.256253 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.256260 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.256266 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.256277 | orchestrator |
2025-07-05 23:06:20.256283 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-05 23:06:20.256290 | orchestrator | Saturday 05 July 2025 23:00:24 +0000 (0:00:00.766) 0:05:12.308 *********
2025-07-05 23:06:20.256296 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.256302 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.256309 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.256315 | orchestrator |
2025-07-05 23:06:20.256321 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-05 23:06:20.256327 | orchestrator | Saturday 05 July 2025 23:00:25 +0000 (0:00:00.364) 0:05:12.672 *********
2025-07-05 23:06:20.256334 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.256340 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.256347 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.256353 | orchestrator |
2025-07-05 23:06:20.256359 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-05 23:06:20.256365 | orchestrator | Saturday 05 July 2025 23:00:25 +0000 (0:00:00.560) 0:05:13.233 *********
2025-07-05 23:06:20.256372 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.256378 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.256384 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.256390 | orchestrator |
2025-07-05 23:06:20.256396 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-05 23:06:20.256403 | orchestrator | Saturday 05 July 2025 23:00:26 +0000 (0:00:00.309) 0:05:13.543 *********
2025-07-05 23:06:20.256409 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.256415 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.256422 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.256428 | orchestrator |
2025-07-05 23:06:20.256434 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-05 23:06:20.256441 | orchestrator | Saturday 05 July 2025 23:00:26 +0000 (0:00:00.744) 0:05:14.287 *********
2025-07-05 23:06:20.256447 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.256453 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.256459 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.256466 | orchestrator |
2025-07-05 23:06:20.256472 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-05 23:06:20.256478 | orchestrator | Saturday 05 July 2025 23:00:27 +0000 (0:00:00.323) 0:05:14.611 *********
2025-07-05 23:06:20.256485 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.256491 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.256497 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.256503 | orchestrator |
2025-07-05 23:06:20.256510 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-05 23:06:20.256516 | orchestrator | Saturday 05 July 2025 23:00:27 +0000 (0:00:00.526) 0:05:15.137 *********
2025-07-05 23:06:20.256522 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.256528 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.256535 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.256541 | orchestrator |
2025-07-05 23:06:20.256547 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-05 23:06:20.256554 | orchestrator | Saturday 05 July 2025 23:00:28 +0000 (0:00:00.797) 0:05:15.934 *********
2025-07-05 23:06:20.256560 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.256566 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.256573 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.256579 | orchestrator |
2025-07-05 23:06:20.256585 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-05 23:06:20.256592 | orchestrator | Saturday 05 July 2025 23:00:29 +0000 (0:00:00.764) 0:05:16.698 *********
2025-07-05 23:06:20.256598 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.256604 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.256610 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.256617 | orchestrator |
2025-07-05 23:06:20.256640 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-05 23:06:20.256652 | orchestrator | Saturday 05 July 2025 23:00:29 +0000 (0:00:00.348) 0:05:17.047 *********
2025-07-05 23:06:20.256659 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.256665 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.256671 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.256678 | orchestrator |
2025-07-05 23:06:20.256684 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-05 23:06:20.256690 | orchestrator | Saturday 05 July 2025 23:00:30 +0000 (0:00:00.565) 0:05:17.612 *********
2025-07-05 23:06:20.256696 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.256703 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.256709 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.256716 | orchestrator |
2025-07-05 23:06:20.256722 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-05 23:06:20.256729 | orchestrator | Saturday 05 July 2025 23:00:30 +0000 (0:00:00.363) 0:05:17.976 *********
2025-07-05 23:06:20.256735 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.256741 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.256768 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.256776 | orchestrator |
2025-07-05 23:06:20.256782 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-05 23:06:20.256788 | orchestrator | Saturday 05 July 2025 23:00:30 +0000 (0:00:00.370) 0:05:18.346 *********
2025-07-05 23:06:20.256795 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.256801 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.256807 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.256813 | orchestrator |
2025-07-05 23:06:20.256819 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-05 23:06:20.256826 | orchestrator | Saturday 05 July 2025 23:00:31 +0000 (0:00:00.303) 0:05:18.649 *********
2025-07-05 23:06:20.256832 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.256838 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.256845 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.256851 | orchestrator |
2025-07-05 23:06:20.256863 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-05 23:06:20.256870 | orchestrator | Saturday 05 July 2025 23:00:31 +0000 (0:00:00.514) 0:05:19.164 *********
2025-07-05 23:06:20.256876 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.256882 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.256889 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.256895 | orchestrator |
2025-07-05 23:06:20.256901 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-05 23:06:20.256907 | orchestrator | Saturday 05 July 2025 23:00:32 +0000 (0:00:00.305) 0:05:19.470 *********
2025-07-05 23:06:20.256914 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.256920 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.256926 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.256933 | orchestrator |
2025-07-05 23:06:20.256939 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-05 23:06:20.256945 | orchestrator | Saturday 05 July 2025 23:00:32 +0000 (0:00:00.320) 0:05:19.790 *********
2025-07-05 23:06:20.256952 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.256958 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.256964 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.256970 | orchestrator |
2025-07-05 23:06:20.256977 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-05 23:06:20.256983 | orchestrator | Saturday 05 July 2025 23:00:32 +0000 (0:00:00.379) 0:05:20.169 *********
2025-07-05 23:06:20.256990 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.256996 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.257003 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.257009 | orchestrator |
2025-07-05 23:06:20.257015 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-07-05 23:06:20.257021 | orchestrator | Saturday 05 July 2025 23:00:33 +0000 (0:00:00.830) 0:05:21.000 *********
2025-07-05 23:06:20.257033 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-05 23:06:20.257039 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-05 23:06:20.257046 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-05 23:06:20.257052 | orchestrator |
2025-07-05 23:06:20.257058 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-07-05 23:06:20.257065 | orchestrator | Saturday 05 July 2025 23:00:34 +0000 (0:00:00.670) 0:05:21.670 *********
2025-07-05 23:06:20.257071 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:06:20.257077 | orchestrator |
2025-07-05 23:06:20.257083 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-07-05 23:06:20.257090 | orchestrator | Saturday 05 July 2025 23:00:34 +0000 (0:00:00.505) 0:05:22.176 *********
2025-07-05 23:06:20.257096 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:06:20.257102 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:06:20.257109 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:06:20.257115 | orchestrator |
2025-07-05 23:06:20.257121 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-07-05 23:06:20.257128 | orchestrator | Saturday 05 July 2025 23:00:35 +0000 (0:00:00.969) 0:05:23.145 *********
2025-07-05 23:06:20.257134 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.257140 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.257146 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.257153 | orchestrator |
2025-07-05 23:06:20.257159 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-07-05 23:06:20.257165 | orchestrator | Saturday 05 July 2025 23:00:36 +0000 (0:00:00.337) 0:05:23.483 *********
2025-07-05 23:06:20.257172 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-05 23:06:20.257178 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-05 23:06:20.257184 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-05 23:06:20.257191 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-07-05 23:06:20.257197 | orchestrator |
2025-07-05 23:06:20.257203 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-07-05 23:06:20.257210 | orchestrator | Saturday 05 July 2025 23:00:46 +0000 (0:00:10.566) 0:05:34.050 *********
2025-07-05 23:06:20.257216 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.257222 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.257228 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.257235 | orchestrator |
2025-07-05 23:06:20.257241 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-07-05 23:06:20.257247 | orchestrator | Saturday 05 July 2025 23:00:46 +0000 (0:00:00.342) 0:05:34.392 *********
2025-07-05 23:06:20.257254 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-07-05 23:06:20.257260 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-05 23:06:20.257266 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-05 23:06:20.257273 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-07-05 23:06:20.257279 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-05 23:06:20.257286 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-05 23:06:20.257292 | orchestrator |
2025-07-05 23:06:20.257317 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-07-05 23:06:20.257324 | orchestrator | Saturday 05 July 2025 23:00:49 +0000 (0:00:02.371) 0:05:36.764 *********
2025-07-05 23:06:20.257331 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-07-05 23:06:20.257337 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-05 23:06:20.257343 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-05 23:06:20.257350 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-05 23:06:20.257360 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-07-05 23:06:20.257366 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-07-05 23:06:20.257373 | orchestrator |
2025-07-05 23:06:20.257379 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-07-05 23:06:20.257385 | orchestrator | Saturday 05 July 2025 23:00:50 +0000 (0:00:01.593) 0:05:38.358 *********
2025-07-05 23:06:20.257395 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.257402 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.257408 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.257414 | orchestrator |
2025-07-05 23:06:20.257421 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-07-05 23:06:20.257427 | orchestrator | Saturday 05 July 2025 23:00:51 +0000 (0:00:00.701) 0:05:39.059 *********
2025-07-05 23:06:20.257433 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.257440 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.257446 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.257452 | orchestrator |
2025-07-05 23:06:20.257458 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-07-05 23:06:20.257465 | orchestrator | Saturday 05 July 2025 23:00:51 +0000 (0:00:00.314) 0:05:39.374 *********
2025-07-05 23:06:20.257471 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.257477 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.257484 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.257490 | orchestrator |
2025-07-05 23:06:20.257496 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-07-05 23:06:20.257502 | orchestrator | Saturday 05 July 2025 23:00:52 +0000 (0:00:00.300) 0:05:39.675 *********
2025-07-05 23:06:20.257509 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:06:20.257515 | orchestrator |
2025-07-05 23:06:20.257521 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-07-05 23:06:20.257528 | orchestrator | Saturday 05 July 2025 23:00:52 +0000 (0:00:00.751) 0:05:40.426 *********
2025-07-05 23:06:20.257534 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.257540 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.257546 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.257553 | orchestrator |
2025-07-05 23:06:20.257559 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-07-05 23:06:20.257566 | orchestrator | Saturday 05 July 2025 23:00:53 +0000 (0:00:00.321) 0:05:40.754 *********
2025-07-05 23:06:20.257572 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.257578 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.257584 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.257591 | orchestrator |
2025-07-05 23:06:20.257597 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-07-05 23:06:20.257604 | orchestrator | Saturday 05 July 2025 23:00:53 +0000 (0:00:00.321) 0:05:41.075 *********
2025-07-05 23:06:20.257610 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:06:20.257616 | orchestrator |
2025-07-05 23:06:20.257673 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-07-05 23:06:20.257681 | orchestrator | Saturday 05 July 2025 23:00:54 +0000 (0:00:00.749) 0:05:41.825 *********
2025-07-05 23:06:20.257688 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:06:20.257694 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:06:20.257700 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:06:20.257707 | orchestrator |
2025-07-05 23:06:20.257713 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-07-05 23:06:20.257719 | orchestrator | Saturday 05 July 2025 23:00:55 +0000 (0:00:01.296) 0:05:43.122 *********
2025-07-05 23:06:20.257726 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:06:20.257732 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:06:20.257738 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:06:20.257750 | orchestrator |
2025-07-05 23:06:20.257756 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-07-05 23:06:20.257763 | orchestrator | Saturday 05 July 2025 23:00:56 +0000 (0:00:01.109) 0:05:44.231 *********
2025-07-05 23:06:20.257769 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:06:20.257775 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:06:20.257782 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:06:20.257788 | orchestrator |
2025-07-05 23:06:20.257794 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-07-05 23:06:20.257801 | orchestrator | Saturday 05 July 2025 23:00:58 +0000 (0:00:01.942) 0:05:46.174 *********
2025-07-05 23:06:20.257807 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:06:20.257813 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:06:20.257819 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:06:20.257826 | orchestrator |
2025-07-05 23:06:20.257832 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-07-05 23:06:20.257839 | orchestrator | Saturday 05 July 2025 23:01:00 +0000 (0:00:01.882) 0:05:48.056 *********
2025-07-05 23:06:20.257845 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.257852 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.257858 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-07-05 23:06:20.257865 | orchestrator |
2025-07-05 23:06:20.257871 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-07-05 23:06:20.257877 | orchestrator | Saturday 05 July 2025 23:01:01 +0000 (0:00:00.432) 0:05:48.489 *********
2025-07-05 23:06:20.257883 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-07-05 23:06:20.257912 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-07-05 23:06:20.257921 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-07-05 23:06:20.257927 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-07-05 23:06:20.257934 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2025-07-05 23:06:20.257940 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2025-07-05 23:06:20.257946 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-07-05 23:06:20.257953 | orchestrator |
2025-07-05 23:06:20.257963 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-07-05 23:06:20.257970 | orchestrator | Saturday 05 July 2025 23:01:37 +0000 (0:00:36.354) 0:06:24.843 *********
2025-07-05 23:06:20.257976 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-07-05 23:06:20.257983 | orchestrator |
2025-07-05 23:06:20.257989 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-07-05 23:06:20.257996 | orchestrator | Saturday 05 July 2025 23:01:38 +0000 (0:00:01.589) 0:06:26.432 *********
2025-07-05 23:06:20.258002 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.258008 | orchestrator |
2025-07-05 23:06:20.258038 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-07-05 23:06:20.258047 | orchestrator | Saturday 05 July 2025 23:01:39 +0000 (0:00:00.528) 0:06:26.961 *********
2025-07-05 23:06:20.258053 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.258059 | orchestrator |
2025-07-05 23:06:20.258066 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-07-05 23:06:20.258072 | orchestrator | Saturday 05 July 2025 23:01:39 +0000 (0:00:00.152) 0:06:27.113 *********
2025-07-05 23:06:20.258079 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-07-05 23:06:20.258085 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-07-05 23:06:20.258092 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-07-05 23:06:20.258103 | orchestrator |
2025-07-05 23:06:20.258109 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-07-05 23:06:20.258116 | orchestrator | Saturday 05 July 2025 23:01:46 +0000 (0:00:06.628) 0:06:33.742 *********
2025-07-05 23:06:20.258122 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-07-05 23:06:20.258128 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-07-05 23:06:20.258135 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-07-05 23:06:20.258141 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-07-05 23:06:20.258148 | orchestrator |
2025-07-05 23:06:20.258154 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-05 23:06:20.258161 | orchestrator | Saturday 05 July 2025 23:01:51 +0000 (0:00:04.820) 0:06:38.562 *********
2025-07-05 23:06:20.258167 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:06:20.258173 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:06:20.258180 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:06:20.258186 | orchestrator |
2025-07-05 23:06:20.258193 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-07-05 23:06:20.258200 | orchestrator | Saturday 05 July 2025 23:01:51 +0000 (0:00:00.868) 0:06:39.431 *********
2025-07-05 23:06:20.258206 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:06:20.258212 | orchestrator |
2025-07-05 23:06:20.258219 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-07-05 23:06:20.258225 | orchestrator | Saturday 05 July 2025 23:01:52 +0000 (0:00:00.522) 0:06:39.953 *********
2025-07-05 23:06:20.258232 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.258238 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.258245 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.258251 | orchestrator |
2025-07-05 23:06:20.258258 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-07-05 23:06:20.258264 | orchestrator | Saturday 05 July 2025 23:01:52 +0000 (0:00:00.303) 0:06:40.257 *********
2025-07-05 23:06:20.258270 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:06:20.258277 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:06:20.258283 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:06:20.258289 | orchestrator |
2025-07-05 23:06:20.258296 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-07-05 23:06:20.258302 | orchestrator | Saturday 05 July 2025 23:01:54 +0000 (0:00:01.544) 0:06:41.801 *********
2025-07-05 23:06:20.258309 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-05 23:06:20.258315 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-05 23:06:20.258321 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-05 23:06:20.258328 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.258334 | orchestrator |
2025-07-05 23:06:20.258341 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-07-05 23:06:20.258347 | orchestrator | Saturday 05 July 2025 23:01:54 +0000 (0:00:00.623) 0:06:42.425 *********
2025-07-05 23:06:20.258353 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.258360 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.258366 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.258372 | orchestrator |
2025-07-05 23:06:20.258379 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-07-05 23:06:20.258385 | orchestrator |
2025-07-05 23:06:20.258392 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-05 23:06:20.258398 | orchestrator | Saturday 05 July 2025 23:01:55 +0000 (0:00:00.610) 0:06:43.035 *********
2025-07-05 23:06:20.258427 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:06:20.258435 | orchestrator |
2025-07-05 23:06:20.258441 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-05 23:06:20.258455 | orchestrator | Saturday 05 July 2025 23:01:56 +0000 (0:00:00.700) 0:06:43.736 *********
2025-07-05 23:06:20.258461 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:06:20.258468 | orchestrator |
2025-07-05 23:06:20.258474 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-05 23:06:20.258480 | orchestrator | Saturday 05 July 2025 23:01:56 +0000 (0:00:00.504) 0:06:44.241 *********
2025-07-05 23:06:20.258487 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.258497 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.258503 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.258510 | orchestrator |
2025-07-05 23:06:20.258517 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-05 23:06:20.258523 | orchestrator | Saturday 05 July 2025 23:01:57 +0000 (0:00:00.305) 0:06:44.547 *********
2025-07-05 23:06:20.258530 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.258536 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.258543 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.258549 | orchestrator |
2025-07-05 23:06:20.258555 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-05 23:06:20.258562 | orchestrator | Saturday 05 July 2025 23:01:58 +0000 (0:00:01.028) 0:06:45.575 *********
2025-07-05 23:06:20.258569 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.258575 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.258581 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.258588 | orchestrator |
2025-07-05 23:06:20.258594 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-05 23:06:20.258600 | orchestrator | Saturday 05 July 2025 23:01:58 +0000 (0:00:00.737) 0:06:46.313 *********
2025-07-05 23:06:20.258607 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.258613 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.258619 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.258643 | orchestrator |
2025-07-05 23:06:20.258649 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-05 23:06:20.258656 | orchestrator | Saturday 05 July 2025 23:01:59 +0000 (0:00:00.737) 0:06:47.050 *********
2025-07-05 23:06:20.258663 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.258669 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.258675 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.258682 | orchestrator |
2025-07-05 23:06:20.258689 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-05 23:06:20.258695 | orchestrator | Saturday 05 July 2025 23:01:59 +0000 (0:00:00.329) 0:06:47.380 *********
2025-07-05 23:06:20.258701 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.258708 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.258714 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.258721 | orchestrator |
2025-07-05 23:06:20.258727 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-05 23:06:20.258733 | orchestrator | Saturday 05 July 2025 23:02:00 +0000 (0:00:00.579) 0:06:47.960 *********
2025-07-05 23:06:20.258740 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.258746 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.258752 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.258759 | orchestrator |
2025-07-05 23:06:20.258765 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-05 23:06:20.258771 | orchestrator | Saturday 05 July 2025 23:02:00 +0000 (0:00:00.304) 0:06:48.264 *********
2025-07-05 23:06:20.258778 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.258784 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.258790 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.258796 | orchestrator |
2025-07-05 23:06:20.258803 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-05 23:06:20.258809 | orchestrator | Saturday 05 July 2025 23:02:01 +0000 (0:00:00.725) 0:06:48.990 *********
2025-07-05 23:06:20.258820 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.258826 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.258833 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.258839 | orchestrator |
2025-07-05 23:06:20.258845 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-05 23:06:20.258852 | orchestrator | Saturday 05 July 2025 23:02:02 +0000 (0:00:00.736) 0:06:49.727 *********
2025-07-05 23:06:20.258858 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.258864 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.258870 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.258877 | orchestrator |
2025-07-05 23:06:20.258883 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-05 23:06:20.258889 | orchestrator | Saturday 05 July 2025 23:02:02 +0000 (0:00:00.565) 0:06:50.292 *********
2025-07-05 23:06:20.258895 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.258902 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.258908 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.258914 | orchestrator |
2025-07-05 23:06:20.258920 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-05 23:06:20.258926 | orchestrator | Saturday 05 July 2025 23:02:03 +0000 (0:00:00.309) 0:06:50.602 *********
2025-07-05 23:06:20.258933 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.258939 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.258945 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.258951 | orchestrator |
2025-07-05 23:06:20.258958 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-05 23:06:20.258964 | orchestrator | Saturday 05 July 2025 23:02:03 +0000 (0:00:00.338) 0:06:50.941 *********
2025-07-05 23:06:20.258970 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.258976 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.258983 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.258989 | orchestrator |
2025-07-05 23:06:20.258995 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-05 23:06:20.259001 | orchestrator | Saturday 05 July 2025 23:02:03 +0000 (0:00:00.364) 0:06:51.305 *********
2025-07-05 23:06:20.259007 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.259014 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.259024 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.259030 | orchestrator |
2025-07-05 23:06:20.259036 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-05 23:06:20.259043 | orchestrator | Saturday 05 July 2025 23:02:04 +0000 (0:00:00.655) 0:06:51.961 *********
2025-07-05 23:06:20.259049 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.259055 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.259062 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.259068 | orchestrator |
2025-07-05 23:06:20.259074 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-05 23:06:20.259080 | orchestrator | Saturday 05 July 2025 23:02:04 +0000 (0:00:00.306) 0:06:52.267 *********
2025-07-05 23:06:20.259087 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.259093 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.259099 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.259105 | orchestrator |
2025-07-05 23:06:20.259115 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-05 23:06:20.259122 | orchestrator | Saturday 05 July 2025 23:02:05 +0000 (0:00:00.318) 0:06:52.586 *********
2025-07-05 23:06:20.259128 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.259134 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.259140 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.259147 | orchestrator |
2025-07-05 23:06:20.259153 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-05 23:06:20.259159 | orchestrator | Saturday 05 July 2025 23:02:05 +0000 (0:00:00.309) 0:06:52.895 *********
2025-07-05 23:06:20.259165 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.259171 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.259182 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.259188 | orchestrator |
2025-07-05 23:06:20.259194 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-05 23:06:20.259201 | orchestrator | Saturday 05 July 2025 23:02:06 +0000 (0:00:00.569) 0:06:53.464 *********
2025-07-05 23:06:20.259207 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.259213 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.259220 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.259226 | orchestrator |
2025-07-05 23:06:20.259233 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-07-05 23:06:20.259239 | orchestrator | Saturday 05 July 2025 23:02:06 +0000 (0:00:00.645) 0:06:54.110 *********
2025-07-05 23:06:20.259245 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.259252 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.259258 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.259264 | orchestrator |
2025-07-05 23:06:20.259270 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-07-05 23:06:20.259276 | orchestrator | Saturday 05 July 2025 23:02:07 +0000 (0:00:00.343) 0:06:54.453 *********
2025-07-05 23:06:20.259283 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-05 23:06:20.259289 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-05 23:06:20.259296 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-05 23:06:20.259302 | orchestrator |
2025-07-05 23:06:20.259308 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-07-05 23:06:20.259315 | orchestrator | Saturday 05 July 2025 23:02:07 +0000 (0:00:00.937) 0:06:55.391 *********
2025-07-05 23:06:20.259321 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:06:20.259327 | orchestrator |
2025-07-05 23:06:20.259333 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-07-05 23:06:20.259340 | orchestrator | Saturday 05 July 2025 23:02:08 +0000
(0:00:00.776) 0:06:56.167 ********* 2025-07-05 23:06:20.259346 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.259352 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.259359 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.259365 | orchestrator | 2025-07-05 23:06:20.259371 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-07-05 23:06:20.259378 | orchestrator | Saturday 05 July 2025 23:02:09 +0000 (0:00:00.316) 0:06:56.484 ********* 2025-07-05 23:06:20.259384 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.259390 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.259396 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.259402 | orchestrator | 2025-07-05 23:06:20.259409 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-07-05 23:06:20.259415 | orchestrator | Saturday 05 July 2025 23:02:09 +0000 (0:00:00.291) 0:06:56.776 ********* 2025-07-05 23:06:20.259421 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.259427 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.259434 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.259440 | orchestrator | 2025-07-05 23:06:20.259446 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-07-05 23:06:20.259453 | orchestrator | Saturday 05 July 2025 23:02:10 +0000 (0:00:00.937) 0:06:57.713 ********* 2025-07-05 23:06:20.259459 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.259465 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.259471 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.259477 | orchestrator | 2025-07-05 23:06:20.259484 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-07-05 23:06:20.259490 | orchestrator | Saturday 05 July 2025 23:02:10 +0000 (0:00:00.354) 0:06:58.068 ********* 
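For reference, the kernel parameters that the ceph-osd system tuning applies on the OSD hosts (the names and values are visible in the item output of this run) can be expressed as a standalone snippet. This is an illustrative sketch, not the role's actual source; the play/host names are assumptions, and the `vm.min_free_kbytes` value of 67584 is specific to the memory size of these testbed nodes:

```yaml
# Sketch: reproduce the sysctl tuning seen in this run on the OSD hosts.
# Hypothetical play; values taken from the item output of this job.
- name: Apply ceph-osd operating system tuning (sketch)
  hosts: osds
  become: true
  tasks:
    - name: Set kernel parameters for OSD hosts
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_set: true
      loop:
        - { name: fs.aio-max-nr, value: "1048576" }
        - { name: fs.file-max, value: "26234859" }
        - { name: vm.zone_reclaim_mode, value: "0" }
        - { name: vm.swappiness, value: "10" }
        - { name: vm.min_free_kbytes, value: "67584" }   # computed per node in the real role
```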
2025-07-05 23:06:20.259496 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-07-05 23:06:20.259506 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-07-05 23:06:20.259512 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-07-05 23:06:20.259519 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-07-05 23:06:20.259531 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-07-05 23:06:20.259537 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-07-05 23:06:20.259543 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-07-05 23:06:20.259550 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-07-05 23:06:20.259556 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-07-05 23:06:20.259562 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-07-05 23:06:20.259568 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-07-05 23:06:20.259578 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-07-05 23:06:20.259584 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-07-05 23:06:20.259590 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-07-05 23:06:20.259597 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-07-05 23:06:20.259603 | orchestrator |
2025-07-05 23:06:20.259609 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-07-05 23:06:20.259615 | orchestrator | Saturday 05 July 2025 23:02:13 +0000 (0:00:03.166) 0:07:01.234 *********
2025-07-05 23:06:20.259641 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.259654 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.259664 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.259674 | orchestrator |
2025-07-05 23:06:20.259685 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-07-05 23:06:20.259692 | orchestrator | Saturday 05 July 2025 23:02:14 +0000 (0:00:00.286) 0:07:01.521 *********
2025-07-05 23:06:20.259698 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:06:20.259704 | orchestrator |
2025-07-05 23:06:20.259711 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-07-05 23:06:20.259717 | orchestrator | Saturday 05 July 2025 23:02:14 +0000 (0:00:00.746) 0:07:02.268 *********
2025-07-05 23:06:20.259723 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-07-05 23:06:20.259729 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-07-05 23:06:20.259735 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-07-05 23:06:20.259742 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-07-05 23:06:20.259748 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-07-05 23:06:20.259754 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-07-05 23:06:20.259761 | orchestrator |
2025-07-05 23:06:20.259767 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-07-05 23:06:20.259774 | orchestrator | Saturday 05 July 2025 23:02:15 +0000 (0:00:00.981) 0:07:03.249 *********
2025-07-05 23:06:20.259780 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-05 23:06:20.259786 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2025-07-05 23:06:20.259793 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-05 23:06:20.259799 | orchestrator |
2025-07-05 23:06:20.259805 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-07-05 23:06:20.259817 | orchestrator | Saturday 05 July 2025 23:02:17 +0000 (0:00:02.152) 0:07:05.402 *********
2025-07-05 23:06:20.259823 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-05 23:06:20.259830 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2025-07-05 23:06:20.259836 | orchestrator | changed: [testbed-node-3]
2025-07-05 23:06:20.259842 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-05 23:06:20.259848 | orchestrator | skipping: [testbed-node-4] => (item=None) 
2025-07-05 23:06:20.259855 | orchestrator | changed: [testbed-node-4]
2025-07-05 23:06:20.259861 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-05 23:06:20.259867 | orchestrator | skipping: [testbed-node-5] => (item=None) 
2025-07-05 23:06:20.259873 | orchestrator | changed: [testbed-node-5]
2025-07-05 23:06:20.259879 | orchestrator |
2025-07-05 23:06:20.259886 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-07-05 23:06:20.259892 | orchestrator | Saturday 05 July 2025 23:02:19 +0000 (0:00:01.191) 0:07:06.594 *********
2025-07-05 23:06:20.259898 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-05 23:06:20.259904 | orchestrator |
2025-07-05 23:06:20.259910 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-07-05 23:06:20.259917 | orchestrator | Saturday 05 July 2025 23:02:21 +0000 (0:00:02.599) 0:07:09.193 *********
2025-07-05 23:06:20.259923 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:06:20.259929 | orchestrator |
2025-07-05 23:06:20.259935 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-07-05 23:06:20.259942 | orchestrator | Saturday 05 July 2025 23:02:22 +0000 (0:00:00.527) 0:07:09.720 *********
2025-07-05 23:06:20.259948 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-469f88b0-11f8-5147-93f6-bf0afec867dc', 'data_vg': 'ceph-469f88b0-11f8-5147-93f6-bf0afec867dc'})
2025-07-05 23:06:20.259956 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9b5adb4f-945c-5107-b1d3-f691d6050e0c', 'data_vg': 'ceph-9b5adb4f-945c-5107-b1d3-f691d6050e0c'})
2025-07-05 23:06:20.259967 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8de564a6-401f-59e2-a445-234b3be175ce', 'data_vg': 'ceph-8de564a6-401f-59e2-a445-234b3be175ce'})
2025-07-05 23:06:20.259974 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2969909f-2c17-514e-91b3-dec9da8cf58e', 'data_vg': 'ceph-2969909f-2c17-514e-91b3-dec9da8cf58e'})
2025-07-05 23:06:20.259980 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-24fdde66-e3ee-586c-8774-3b73abfeacc0', 'data_vg': 'ceph-24fdde66-e3ee-586c-8774-3b73abfeacc0'})
2025-07-05 23:06:20.259986 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2634d3d6-ac41-59e6-b3da-1ade7ee25156', 'data_vg': 'ceph-2634d3d6-ac41-59e6-b3da-1ade7ee25156'})
2025-07-05 23:06:20.259993 | orchestrator |
2025-07-05 23:06:20.260003 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-07-05 23:06:20.260009 | orchestrator | Saturday 05 July 2025 23:03:03 +0000 (0:00:41.009) 0:07:50.730 *********
2025-07-05 23:06:20.260015 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.260022 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.260028 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.260035 | orchestrator |
2025-07-05 23:06:20.260041 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-07-05 23:06:20.260047 | orchestrator | Saturday 05 July 2025 23:03:03 +0000 (0:00:00.504) 0:07:51.235 *********
2025-07-05 23:06:20.260054 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:06:20.260060 | orchestrator |
2025-07-05 23:06:20.260066 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-07-05 23:06:20.260072 | orchestrator | Saturday 05 July 2025 23:03:04 +0000 (0:00:00.545) 0:07:51.781 *********
2025-07-05 23:06:20.260079 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.260089 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.260096 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.260102 | orchestrator |
2025-07-05 23:06:20.260108 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-07-05 23:06:20.260115 | orchestrator | Saturday 05 July 2025 23:03:04 +0000 (0:00:00.637) 0:07:52.418 *********
2025-07-05 23:06:20.260121 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.260127 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.260134 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.260140 | orchestrator |
2025-07-05 23:06:20.260146 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-07-05 23:06:20.260152 | orchestrator | Saturday 05 July 2025 23:03:07 +0000 (0:00:02.828) 0:07:55.247 *********
2025-07-05 23:06:20.260159 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:06:20.260165 | orchestrator |
2025-07-05 23:06:20.260171 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-07-05 23:06:20.260177 | orchestrator | Saturday 05 July 2025 23:03:08 +0000 (0:00:00.513) 0:07:55.761 *********
2025-07-05 23:06:20.260184 | orchestrator | changed: [testbed-node-3]
2025-07-05 23:06:20.260191 | orchestrator | changed: [testbed-node-4]
2025-07-05 23:06:20.260197 | orchestrator | changed: [testbed-node-5]
2025-07-05 23:06:20.260204 | orchestrator |
2025-07-05 23:06:20.260211 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-07-05 23:06:20.260217 | orchestrator | Saturday 05 July 2025 23:03:09 +0000 (0:00:01.198) 0:07:56.959 *********
2025-07-05 23:06:20.260224 | orchestrator | changed: [testbed-node-3]
2025-07-05 23:06:20.260231 | orchestrator | changed: [testbed-node-4]
2025-07-05 23:06:20.260238 | orchestrator | changed: [testbed-node-5]
2025-07-05 23:06:20.260244 | orchestrator |
2025-07-05 23:06:20.260251 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-07-05 23:06:20.260261 | orchestrator | Saturday 05 July 2025 23:03:10 +0000 (0:00:01.378) 0:07:58.337 *********
2025-07-05 23:06:20.260271 | orchestrator | changed: [testbed-node-3]
2025-07-05 23:06:20.260287 | orchestrator | changed: [testbed-node-4]
2025-07-05 23:06:20.260302 | orchestrator | changed: [testbed-node-5]
2025-07-05 23:06:20.260312 | orchestrator |
2025-07-05 23:06:20.260322 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-07-05 23:06:20.260333 | orchestrator | Saturday 05 July 2025 23:03:12 +0000 (0:00:01.655) 0:07:59.993 *********
2025-07-05 23:06:20.260344 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.260354 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.260363 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.260373 | orchestrator |
2025-07-05 23:06:20.260384 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-07-05 23:06:20.260394 | orchestrator | Saturday 05 July 2025 23:03:12 +0000 (0:00:00.317) 0:08:00.311 *********
2025-07-05 23:06:20.260406 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.260416 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.260427 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.260438 | orchestrator |
2025-07-05 23:06:20.260449 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-07-05 23:06:20.260461 | orchestrator | Saturday 05 July 2025 23:03:13 +0000 (0:00:00.353) 0:08:00.664 *********
2025-07-05 23:06:20.260472 | orchestrator | ok: [testbed-node-3] => (item=5)
2025-07-05 23:06:20.260483 | orchestrator | ok: [testbed-node-4] => (item=2)
2025-07-05 23:06:20.260493 | orchestrator | ok: [testbed-node-5] => (item=3)
2025-07-05 23:06:20.260504 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-05 23:06:20.260515 | orchestrator | ok: [testbed-node-4] => (item=4)
2025-07-05 23:06:20.260527 | orchestrator | ok: [testbed-node-5] => (item=1)
2025-07-05 23:06:20.260537 | orchestrator |
2025-07-05 23:06:20.260548 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-07-05 23:06:20.260559 | orchestrator | Saturday 05 July 2025 23:03:14 +0000 (0:00:01.211) 0:08:01.875 *********
2025-07-05 23:06:20.260574 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-07-05 23:06:20.260580 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-07-05 23:06:20.260587 | orchestrator | changed: [testbed-node-5] => (item=3)
2025-07-05 23:06:20.260594 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-07-05 23:06:20.260607 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-07-05 23:06:20.260614 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-07-05 23:06:20.260644 | orchestrator |
2025-07-05 23:06:20.260652 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-07-05 23:06:20.260659 | orchestrator | Saturday 05 July 2025 23:03:16 +0000 (0:00:02.072) 0:08:03.948 *********
2025-07-05 23:06:20.260666 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-07-05 23:06:20.260673 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-07-05 23:06:20.260679 | orchestrator | changed: [testbed-node-5] => (item=3)
2025-07-05 23:06:20.260686 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-07-05 23:06:20.260693 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-07-05 23:06:20.260700 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-07-05 23:06:20.260707 | orchestrator |
2025-07-05 23:06:20.260713 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-07-05 23:06:20.260725 | orchestrator | Saturday 05 July 2025 23:03:19 +0000 (0:00:03.454) 0:08:07.403 *********
2025-07-05 23:06:20.260732 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.260739 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.260746 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-05 23:06:20.260753 | orchestrator |
2025-07-05 23:06:20.260759 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-07-05 23:06:20.260766 | orchestrator | Saturday 05 July 2025 23:03:22 +0000 (0:00:02.480) 0:08:09.883 *********
2025-07-05 23:06:20.260773 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.260779 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.260786 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
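The "Wait for all osd to be up" task above uses Ansible's retry loop (it failed once here, then succeeded on a later attempt out of 60 retries). A sketch of that pattern, polling `ceph osd stat -f json` from the first monitor; the task structure, group name, and delay are assumptions, not the role's actual source:

```yaml
# Sketch: poll the cluster until every registered OSD reports up.
# "mons" group name and the 10s delay are assumptions for illustration.
- name: Wait for all OSDs to be up (sketch)
  ansible.builtin.command: ceph osd stat -f json
  register: osd_stat
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
  changed_when: false
  retries: 60
  delay: 10
  until: >-
    (osd_stat.stdout | from_json).num_osds > 0 and
    (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds
```

Each failed attempt prints a `FAILED - RETRYING: ... (N retries left).` line like the one in this log.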
2025-07-05 23:06:20.260793 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-05 23:06:20.260800 | orchestrator |
2025-07-05 23:06:20.260807 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-07-05 23:06:20.260814 | orchestrator | Saturday 05 July 2025 23:03:35 +0000 (0:00:13.048) 0:08:22.931 *********
2025-07-05 23:06:20.260820 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.260827 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.260834 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.260840 | orchestrator |
2025-07-05 23:06:20.260847 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-05 23:06:20.260854 | orchestrator | Saturday 05 July 2025 23:03:36 +0000 (0:00:00.820) 0:08:23.752 *********
2025-07-05 23:06:20.260861 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.260867 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.260874 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.260881 | orchestrator |
2025-07-05 23:06:20.260888 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-07-05 23:06:20.260895 | orchestrator | Saturday 05 July 2025 23:03:36 +0000 (0:00:00.652) 0:08:24.404 *********
2025-07-05 23:06:20.260902 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:06:20.260908 | orchestrator |
2025-07-05 23:06:20.260915 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-07-05 23:06:20.260922 | orchestrator | Saturday 05 July 2025 23:03:37 +0000 (0:00:00.565) 0:08:24.969 *********
2025-07-05 23:06:20.260929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2025-07-05 23:06:20.260935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2025-07-05 23:06:20.260942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2025-07-05 23:06:20.260954 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.260961 | orchestrator |
2025-07-05 23:06:20.260968 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-07-05 23:06:20.260975 | orchestrator | Saturday 05 July 2025 23:03:37 +0000 (0:00:00.447) 0:08:25.417 *********
2025-07-05 23:06:20.260982 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.260988 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.260995 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.261002 | orchestrator |
2025-07-05 23:06:20.261009 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-07-05 23:06:20.261015 | orchestrator | Saturday 05 July 2025 23:03:38 +0000 (0:00:00.316) 0:08:25.734 *********
2025-07-05 23:06:20.261022 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261029 | orchestrator |
2025-07-05 23:06:20.261036 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-07-05 23:06:20.261042 | orchestrator | Saturday 05 July 2025 23:03:38 +0000 (0:00:00.223) 0:08:25.957 *********
2025-07-05 23:06:20.261049 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261056 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.261063 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.261069 | orchestrator |
2025-07-05 23:06:20.261076 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-07-05 23:06:20.261083 | orchestrator | Saturday 05 July 2025 23:03:39 +0000 (0:00:00.553) 0:08:26.510 *********
2025-07-05 23:06:20.261090 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261096 | orchestrator |
2025-07-05 23:06:20.261103 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-07-05 23:06:20.261110 | orchestrator | Saturday 05 July 2025 23:03:39 +0000 (0:00:00.265) 0:08:26.776 *********
2025-07-05 23:06:20.261117 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261123 | orchestrator |
2025-07-05 23:06:20.261130 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-07-05 23:06:20.261137 | orchestrator | Saturday 05 July 2025 23:03:39 +0000 (0:00:00.268) 0:08:27.045 *********
2025-07-05 23:06:20.261144 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261151 | orchestrator |
2025-07-05 23:06:20.261157 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-07-05 23:06:20.261164 | orchestrator | Saturday 05 July 2025 23:03:39 +0000 (0:00:00.130) 0:08:27.176 *********
2025-07-05 23:06:20.261171 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261178 | orchestrator |
2025-07-05 23:06:20.261188 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-07-05 23:06:20.261195 | orchestrator | Saturday 05 July 2025 23:03:39 +0000 (0:00:00.230) 0:08:27.406 *********
2025-07-05 23:06:20.261202 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261209 | orchestrator |
2025-07-05 23:06:20.261216 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-07-05 23:06:20.261222 | orchestrator | Saturday 05 July 2025 23:03:40 +0000 (0:00:00.237) 0:08:27.644 *********
2025-07-05 23:06:20.261229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2025-07-05 23:06:20.261236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2025-07-05 23:06:20.261243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2025-07-05 23:06:20.261249 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261256 | orchestrator |
2025-07-05 23:06:20.261267 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-07-05 23:06:20.261274 | orchestrator | Saturday 05 July 2025 23:03:40 +0000 (0:00:00.400) 0:08:28.044 *********
2025-07-05 23:06:20.261280 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261287 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.261294 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.261301 | orchestrator |
2025-07-05 23:06:20.261307 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-07-05 23:06:20.261319 | orchestrator | Saturday 05 July 2025 23:03:40 +0000 (0:00:00.296) 0:08:28.340 *********
2025-07-05 23:06:20.261326 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261333 | orchestrator |
2025-07-05 23:06:20.261339 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-07-05 23:06:20.261346 | orchestrator | Saturday 05 July 2025 23:03:41 +0000 (0:00:00.768) 0:08:29.109 *********
2025-07-05 23:06:20.261353 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261360 | orchestrator |
2025-07-05 23:06:20.261366 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-07-05 23:06:20.261373 | orchestrator |
2025-07-05 23:06:20.261380 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-05 23:06:20.261387 | orchestrator | Saturday 05 July 2025 23:03:42 +0000 (0:00:00.632) 0:08:29.741 *********
2025-07-05 23:06:20.261394 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:06:20.261403 | orchestrator |
2025-07-05 23:06:20.261409 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-05 23:06:20.261416 | orchestrator | Saturday 05 July 2025 23:03:43 +0000 (0:00:01.164) 0:08:30.905 *********
2025-07-05 23:06:20.261423 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:06:20.261430 | orchestrator |
2025-07-05 23:06:20.261437 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-05 23:06:20.261444 | orchestrator | Saturday 05 July 2025 23:03:44 +0000 (0:00:01.221) 0:08:32.127 *********
2025-07-05 23:06:20.261451 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261457 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.261464 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.261471 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.261478 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.261484 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.261491 | orchestrator |
2025-07-05 23:06:20.261498 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-05 23:06:20.261505 | orchestrator | Saturday 05 July 2025 23:03:45 +0000 (0:00:01.208) 0:08:33.336 *********
2025-07-05 23:06:20.261511 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.261518 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.261525 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.261532 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.261538 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.261545 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.261552 | orchestrator |
2025-07-05 23:06:20.261559 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-05 23:06:20.261565 | orchestrator | Saturday 05 July 2025 23:03:46 +0000 (0:00:00.766) 0:08:34.102 *********
2025-07-05 23:06:20.261572 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.261579 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.261586 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.261592 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.261599 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.261606 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.261612 | orchestrator |
2025-07-05 23:06:20.261619 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-05 23:06:20.261670 | orchestrator | Saturday 05 July 2025 23:03:47 +0000 (0:00:00.900) 0:08:35.002 *********
2025-07-05 23:06:20.261678 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.261684 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.261691 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.261698 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.261704 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.261711 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.261725 | orchestrator |
2025-07-05 23:06:20.261732 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-05 23:06:20.261739 | orchestrator | Saturday 05 July 2025 23:03:48 +0000 (0:00:00.715) 0:08:35.718 *********
2025-07-05 23:06:20.261745 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261752 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.261759 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.261766 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.261772 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.261779 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.261786 | orchestrator |
2025-07-05 23:06:20.261793 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-05 23:06:20.261800 | orchestrator | Saturday 05 July 2025 23:03:49 +0000 (0:00:01.208) 0:08:36.927 *********
2025-07-05 23:06:20.261807 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261813 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.261825 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.261832 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.261839 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.261845 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.261852 | orchestrator |
2025-07-05 23:06:20.261859 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-05 23:06:20.261866 | orchestrator | Saturday 05 July 2025 23:03:50 +0000 (0:00:00.645) 0:08:37.572 *********
2025-07-05 23:06:20.261872 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:06:20.261879 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:06:20.261886 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:06:20.261893 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:06:20.261899 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:06:20.261906 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:06:20.261913 | orchestrator |
2025-07-05 23:06:20.261924 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-05 23:06:20.261931 | orchestrator | Saturday 05 July 2025 23:03:50 +0000 (0:00:00.853) 0:08:38.425 *********
2025-07-05 23:06:20.261938 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:06:20.261944 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:06:20.261951 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:06:20.261958 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:06:20.261964 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:06:20.261971 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:06:20.261978 | orchestrator
| 2025-07-05 23:06:20.261985 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-05 23:06:20.261991 | orchestrator | Saturday 05 July 2025 23:03:52 +0000 (0:00:01.153) 0:08:39.579 ********* 2025-07-05 23:06:20.261998 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.262005 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.262012 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.262060 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.262068 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.262074 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.262081 | orchestrator | 2025-07-05 23:06:20.262088 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-05 23:06:20.262095 | orchestrator | Saturday 05 July 2025 23:03:53 +0000 (0:00:01.213) 0:08:40.792 ********* 2025-07-05 23:06:20.262101 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.262108 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.262115 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.262121 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.262128 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.262135 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.262141 | orchestrator | 2025-07-05 23:06:20.262148 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-05 23:06:20.262155 | orchestrator | Saturday 05 July 2025 23:03:53 +0000 (0:00:00.567) 0:08:41.359 ********* 2025-07-05 23:06:20.262166 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.262173 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.262180 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.262186 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.262193 | orchestrator | ok: [testbed-node-1] 2025-07-05 
23:06:20.262200 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.262207 | orchestrator | 2025-07-05 23:06:20.262214 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-05 23:06:20.262220 | orchestrator | Saturday 05 July 2025 23:03:54 +0000 (0:00:00.805) 0:08:42.165 ********* 2025-07-05 23:06:20.262227 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.262234 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.262241 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.262247 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.262254 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.262261 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.262267 | orchestrator | 2025-07-05 23:06:20.262274 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-05 23:06:20.262281 | orchestrator | Saturday 05 July 2025 23:03:55 +0000 (0:00:00.599) 0:08:42.764 ********* 2025-07-05 23:06:20.262288 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.262295 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.262301 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.262308 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.262315 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.262322 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.262328 | orchestrator | 2025-07-05 23:06:20.262335 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-05 23:06:20.262342 | orchestrator | Saturday 05 July 2025 23:03:56 +0000 (0:00:00.843) 0:08:43.607 ********* 2025-07-05 23:06:20.262349 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.262355 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.262362 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.262369 | orchestrator | skipping: [testbed-node-0] 
2025-07-05 23:06:20.262375 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.262382 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.262389 | orchestrator | 2025-07-05 23:06:20.262395 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-05 23:06:20.262402 | orchestrator | Saturday 05 July 2025 23:03:56 +0000 (0:00:00.621) 0:08:44.229 ********* 2025-07-05 23:06:20.262409 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.262416 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.262426 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.262439 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.262451 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.262462 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.262473 | orchestrator | 2025-07-05 23:06:20.262485 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-05 23:06:20.262492 | orchestrator | Saturday 05 July 2025 23:03:57 +0000 (0:00:00.821) 0:08:45.051 ********* 2025-07-05 23:06:20.262499 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.262505 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.262512 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.262519 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:06:20.262525 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:06:20.262532 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:06:20.262539 | orchestrator | 2025-07-05 23:06:20.262545 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-05 23:06:20.262552 | orchestrator | Saturday 05 July 2025 23:03:58 +0000 (0:00:00.602) 0:08:45.653 ********* 2025-07-05 23:06:20.262565 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.262572 | orchestrator | skipping: [testbed-node-4] 
2025-07-05 23:06:20.262580 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.262587 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.262600 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.262607 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.262614 | orchestrator | 2025-07-05 23:06:20.262641 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-05 23:06:20.262649 | orchestrator | Saturday 05 July 2025 23:03:58 +0000 (0:00:00.785) 0:08:46.439 ********* 2025-07-05 23:06:20.262657 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.262664 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.262671 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.262679 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.262686 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.262693 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.262700 | orchestrator | 2025-07-05 23:06:20.262712 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-05 23:06:20.262719 | orchestrator | Saturday 05 July 2025 23:03:59 +0000 (0:00:00.666) 0:08:47.105 ********* 2025-07-05 23:06:20.262727 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.262734 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.262741 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.262748 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.262756 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.262763 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.262770 | orchestrator | 2025-07-05 23:06:20.262777 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-07-05 23:06:20.262785 | orchestrator | Saturday 05 July 2025 23:04:00 +0000 (0:00:01.254) 0:08:48.359 ********* 2025-07-05 23:06:20.262792 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2025-07-05 23:06:20.262799 | orchestrator | 2025-07-05 23:06:20.262807 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-07-05 23:06:20.262814 | orchestrator | Saturday 05 July 2025 23:04:05 +0000 (0:00:04.134) 0:08:52.493 ********* 2025-07-05 23:06:20.262821 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-05 23:06:20.262828 | orchestrator | 2025-07-05 23:06:20.262836 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-07-05 23:06:20.262843 | orchestrator | Saturday 05 July 2025 23:04:07 +0000 (0:00:02.038) 0:08:54.532 ********* 2025-07-05 23:06:20.262850 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.262858 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.262865 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.262872 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.262879 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.262887 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.262894 | orchestrator | 2025-07-05 23:06:20.262901 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-07-05 23:06:20.262909 | orchestrator | Saturday 05 July 2025 23:04:09 +0000 (0:00:01.955) 0:08:56.488 ********* 2025-07-05 23:06:20.262916 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.262923 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.262930 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.262938 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.262945 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.262952 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.262959 | orchestrator | 2025-07-05 23:06:20.262966 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2025-07-05 23:06:20.262974 | orchestrator | Saturday 05 July 2025 23:04:10 +0000 (0:00:01.065) 0:08:57.554 ********* 2025-07-05 23:06:20.262981 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:06:20.262990 | orchestrator | 2025-07-05 23:06:20.262997 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-07-05 23:06:20.263004 | orchestrator | Saturday 05 July 2025 23:04:11 +0000 (0:00:01.306) 0:08:58.860 ********* 2025-07-05 23:06:20.263012 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.263023 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.263030 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.263038 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.263045 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.263052 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.263059 | orchestrator | 2025-07-05 23:06:20.263067 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-07-05 23:06:20.263074 | orchestrator | Saturday 05 July 2025 23:04:13 +0000 (0:00:02.064) 0:09:00.925 ********* 2025-07-05 23:06:20.263081 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.263089 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.263096 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.263103 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.263110 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.263117 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.263125 | orchestrator | 2025-07-05 23:06:20.263132 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-07-05 23:06:20.263139 | orchestrator | Saturday 05 July 2025 23:04:17 +0000 (0:00:03.945) 
0:09:04.871 ********* 2025-07-05 23:06:20.263147 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:06:20.263154 | orchestrator | 2025-07-05 23:06:20.263161 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-07-05 23:06:20.263168 | orchestrator | Saturday 05 July 2025 23:04:18 +0000 (0:00:01.317) 0:09:06.188 ********* 2025-07-05 23:06:20.263176 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.263183 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.263190 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.263198 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.263205 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.263212 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.263219 | orchestrator | 2025-07-05 23:06:20.263227 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-07-05 23:06:20.263234 | orchestrator | Saturday 05 July 2025 23:04:19 +0000 (0:00:00.928) 0:09:07.117 ********* 2025-07-05 23:06:20.263246 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.263254 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.263261 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.263269 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:06:20.263276 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:06:20.263283 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:06:20.263291 | orchestrator | 2025-07-05 23:06:20.263298 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-07-05 23:06:20.263305 | orchestrator | Saturday 05 July 2025 23:04:22 +0000 (0:00:02.480) 0:09:09.597 ********* 2025-07-05 23:06:20.263313 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.263320 | 
orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.263327 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.263334 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:06:20.263342 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:06:20.263349 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:06:20.263356 | orchestrator | 2025-07-05 23:06:20.263367 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-07-05 23:06:20.263375 | orchestrator | 2025-07-05 23:06:20.263382 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-05 23:06:20.263390 | orchestrator | Saturday 05 July 2025 23:04:23 +0000 (0:00:01.086) 0:09:10.684 ********* 2025-07-05 23:06:20.263397 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:06:20.263405 | orchestrator | 2025-07-05 23:06:20.263412 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-05 23:06:20.263419 | orchestrator | Saturday 05 July 2025 23:04:23 +0000 (0:00:00.500) 0:09:11.184 ********* 2025-07-05 23:06:20.263432 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:06:20.263439 | orchestrator | 2025-07-05 23:06:20.263447 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-05 23:06:20.263454 | orchestrator | Saturday 05 July 2025 23:04:24 +0000 (0:00:00.774) 0:09:11.959 ********* 2025-07-05 23:06:20.263461 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.263469 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.263476 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.263483 | orchestrator | 2025-07-05 23:06:20.263490 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2025-07-05 23:06:20.263498 | orchestrator | Saturday 05 July 2025 23:04:24 +0000 (0:00:00.368) 0:09:12.328 ********* 2025-07-05 23:06:20.263505 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.263512 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.263520 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.263527 | orchestrator | 2025-07-05 23:06:20.263534 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-05 23:06:20.263542 | orchestrator | Saturday 05 July 2025 23:04:25 +0000 (0:00:00.712) 0:09:13.041 ********* 2025-07-05 23:06:20.263549 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.263556 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.263564 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.263571 | orchestrator | 2025-07-05 23:06:20.263578 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-05 23:06:20.263585 | orchestrator | Saturday 05 July 2025 23:04:26 +0000 (0:00:00.996) 0:09:14.038 ********* 2025-07-05 23:06:20.263593 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.263600 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.263607 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.263614 | orchestrator | 2025-07-05 23:06:20.263640 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-05 23:06:20.263651 | orchestrator | Saturday 05 July 2025 23:04:27 +0000 (0:00:00.738) 0:09:14.777 ********* 2025-07-05 23:06:20.263659 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.263666 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.263673 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.263681 | orchestrator | 2025-07-05 23:06:20.263688 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-05 
23:06:20.263695 | orchestrator | Saturday 05 July 2025 23:04:27 +0000 (0:00:00.324) 0:09:15.101 ********* 2025-07-05 23:06:20.263703 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.263710 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.263718 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.263725 | orchestrator | 2025-07-05 23:06:20.263732 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-05 23:06:20.263740 | orchestrator | Saturday 05 July 2025 23:04:27 +0000 (0:00:00.319) 0:09:15.420 ********* 2025-07-05 23:06:20.263747 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.263754 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.263762 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.263769 | orchestrator | 2025-07-05 23:06:20.263776 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-05 23:06:20.263784 | orchestrator | Saturday 05 July 2025 23:04:28 +0000 (0:00:00.578) 0:09:15.999 ********* 2025-07-05 23:06:20.263791 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.263798 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.263805 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.263813 | orchestrator | 2025-07-05 23:06:20.263820 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-05 23:06:20.263827 | orchestrator | Saturday 05 July 2025 23:04:29 +0000 (0:00:00.787) 0:09:16.787 ********* 2025-07-05 23:06:20.263835 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.263842 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.263849 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.263860 | orchestrator | 2025-07-05 23:06:20.263869 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-05 23:06:20.263882 | orchestrator | 
Saturday 05 July 2025 23:04:30 +0000 (0:00:00.822) 0:09:17.610 ********* 2025-07-05 23:06:20.263894 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.263905 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.263916 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.263928 | orchestrator | 2025-07-05 23:06:20.263938 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-05 23:06:20.263950 | orchestrator | Saturday 05 July 2025 23:04:30 +0000 (0:00:00.349) 0:09:17.959 ********* 2025-07-05 23:06:20.263969 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.263982 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.263990 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.263997 | orchestrator | 2025-07-05 23:06:20.264004 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-05 23:06:20.264012 | orchestrator | Saturday 05 July 2025 23:04:31 +0000 (0:00:00.558) 0:09:18.518 ********* 2025-07-05 23:06:20.264019 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.264026 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.264034 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.264041 | orchestrator | 2025-07-05 23:06:20.264049 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-05 23:06:20.264056 | orchestrator | Saturday 05 July 2025 23:04:31 +0000 (0:00:00.335) 0:09:18.853 ********* 2025-07-05 23:06:20.264063 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.264070 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.264078 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.264085 | orchestrator | 2025-07-05 23:06:20.264097 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-05 23:06:20.264105 | orchestrator | Saturday 05 July 2025 23:04:31 +0000 
(0:00:00.355) 0:09:19.208 ********* 2025-07-05 23:06:20.264112 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.264120 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.264127 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.264134 | orchestrator | 2025-07-05 23:06:20.264142 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-05 23:06:20.264149 | orchestrator | Saturday 05 July 2025 23:04:32 +0000 (0:00:00.409) 0:09:19.618 ********* 2025-07-05 23:06:20.264157 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.264164 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.264171 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.264179 | orchestrator | 2025-07-05 23:06:20.264186 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-05 23:06:20.264193 | orchestrator | Saturday 05 July 2025 23:04:32 +0000 (0:00:00.707) 0:09:20.326 ********* 2025-07-05 23:06:20.264200 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.264208 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.264215 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.264222 | orchestrator | 2025-07-05 23:06:20.264230 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-05 23:06:20.264237 | orchestrator | Saturday 05 July 2025 23:04:33 +0000 (0:00:00.391) 0:09:20.718 ********* 2025-07-05 23:06:20.264245 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.264252 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.264259 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.264266 | orchestrator | 2025-07-05 23:06:20.264274 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-05 23:06:20.264281 | orchestrator | Saturday 05 July 2025 23:04:33 +0000 (0:00:00.325) 
0:09:21.044 ********* 2025-07-05 23:06:20.264288 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.264296 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.264303 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.264310 | orchestrator | 2025-07-05 23:06:20.264318 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-05 23:06:20.264331 | orchestrator | Saturday 05 July 2025 23:04:34 +0000 (0:00:00.412) 0:09:21.456 ********* 2025-07-05 23:06:20.264338 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.264345 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.264353 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.264360 | orchestrator | 2025-07-05 23:06:20.264368 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-07-05 23:06:20.264375 | orchestrator | Saturday 05 July 2025 23:04:34 +0000 (0:00:00.834) 0:09:22.290 ********* 2025-07-05 23:06:20.264382 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.264390 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.264397 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-07-05 23:06:20.264405 | orchestrator | 2025-07-05 23:06:20.264412 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-07-05 23:06:20.264419 | orchestrator | Saturday 05 July 2025 23:04:35 +0000 (0:00:00.427) 0:09:22.718 ********* 2025-07-05 23:06:20.264427 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-05 23:06:20.264434 | orchestrator | 2025-07-05 23:06:20.264441 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-07-05 23:06:20.264449 | orchestrator | Saturday 05 July 2025 23:04:37 +0000 (0:00:02.205) 0:09:24.924 ********* 2025-07-05 23:06:20.264458 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-07-05 23:06:20.264467 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.264474 | orchestrator | 2025-07-05 23:06:20.264482 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-07-05 23:06:20.264489 | orchestrator | Saturday 05 July 2025 23:04:37 +0000 (0:00:00.204) 0:09:25.128 ********* 2025-07-05 23:06:20.264498 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-05 23:06:20.264511 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-05 23:06:20.264519 | orchestrator | 2025-07-05 23:06:20.264527 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-07-05 23:06:20.264538 | orchestrator | Saturday 05 July 2025 23:04:46 +0000 (0:00:08.828) 0:09:33.957 ********* 2025-07-05 23:06:20.264546 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-05 23:06:20.264553 | orchestrator | 2025-07-05 23:06:20.264560 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-07-05 23:06:20.264568 | orchestrator | Saturday 05 July 2025 23:04:50 +0000 (0:00:04.289) 0:09:38.246 ********* 2025-07-05 23:06:20.264575 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-07-05 23:06:20.264583 | orchestrator | 2025-07-05 23:06:20.264590 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-07-05 23:06:20.264598 | orchestrator | Saturday 05 July 2025 23:04:51 +0000 (0:00:00.690) 0:09:38.937 ********* 2025-07-05 23:06:20.264605 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-05 23:06:20.264616 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-05 23:06:20.264641 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-05 23:06:20.264649 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-07-05 23:06:20.264661 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-07-05 23:06:20.264669 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-07-05 23:06:20.264676 | orchestrator | 2025-07-05 23:06:20.264683 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-07-05 23:06:20.264691 | orchestrator | Saturday 05 July 2025 23:04:52 +0000 (0:00:01.141) 0:09:40.078 ********* 2025-07-05 23:06:20.264698 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:06:20.264708 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-05 23:06:20.264721 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-05 23:06:20.264732 | orchestrator | 2025-07-05 23:06:20.264744 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-07-05 23:06:20.264755 | orchestrator | Saturday 05 July 2025 23:04:54 +0000 (0:00:02.099) 0:09:42.178 ********* 2025-07-05 23:06:20.264766 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-05 23:06:20.264779 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2025-07-05 23:06:20.264791 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.264803 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-05 23:06:20.264815 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-05 23:06:20.264828 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.264840 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-05 23:06:20.264853 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-05 23:06:20.264864 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.264872 | orchestrator | 2025-07-05 23:06:20.264880 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-07-05 23:06:20.264888 | orchestrator | Saturday 05 July 2025 23:04:56 +0000 (0:00:01.477) 0:09:43.656 ********* 2025-07-05 23:06:20.264896 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.264904 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.264912 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.264920 | orchestrator | 2025-07-05 23:06:20.264928 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-07-05 23:06:20.264936 | orchestrator | Saturday 05 July 2025 23:04:58 +0000 (0:00:02.565) 0:09:46.221 ********* 2025-07-05 23:06:20.264944 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.264952 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.264960 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.264967 | orchestrator | 2025-07-05 23:06:20.264975 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-07-05 23:06:20.264983 | orchestrator | Saturday 05 July 2025 23:04:59 +0000 (0:00:00.323) 0:09:46.544 ********* 2025-07-05 23:06:20.264991 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-07-05 23:06:20.265000 | orchestrator | 2025-07-05 23:06:20.265014 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-07-05 23:06:20.265028 | orchestrator | Saturday 05 July 2025 23:04:59 +0000 (0:00:00.796) 0:09:47.340 ********* 2025-07-05 23:06:20.265040 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:06:20.265054 | orchestrator | 2025-07-05 23:06:20.265067 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-07-05 23:06:20.265081 | orchestrator | Saturday 05 July 2025 23:05:00 +0000 (0:00:00.610) 0:09:47.951 ********* 2025-07-05 23:06:20.265094 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.265108 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.265121 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.265135 | orchestrator | 2025-07-05 23:06:20.265148 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-07-05 23:06:20.265161 | orchestrator | Saturday 05 July 2025 23:05:01 +0000 (0:00:01.321) 0:09:49.273 ********* 2025-07-05 23:06:20.265180 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.265188 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.265196 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.265204 | orchestrator | 2025-07-05 23:06:20.265212 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-07-05 23:06:20.265220 | orchestrator | Saturday 05 July 2025 23:05:03 +0000 (0:00:01.449) 0:09:50.722 ********* 2025-07-05 23:06:20.265228 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.265236 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.265244 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.265252 | orchestrator | 2025-07-05 
23:06:20.265260 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-07-05 23:06:20.265268 | orchestrator | Saturday 05 July 2025 23:05:04 +0000 (0:00:01.700) 0:09:52.422 ********* 2025-07-05 23:06:20.265276 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.265291 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.265299 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.265307 | orchestrator | 2025-07-05 23:06:20.265315 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-07-05 23:06:20.265323 | orchestrator | Saturday 05 July 2025 23:05:06 +0000 (0:00:01.898) 0:09:54.321 ********* 2025-07-05 23:06:20.265331 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.265339 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.265347 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.265355 | orchestrator | 2025-07-05 23:06:20.265363 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-05 23:06:20.265371 | orchestrator | Saturday 05 July 2025 23:05:08 +0000 (0:00:01.540) 0:09:55.861 ********* 2025-07-05 23:06:20.265379 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.265387 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.265395 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.265403 | orchestrator | 2025-07-05 23:06:20.265416 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-07-05 23:06:20.265424 | orchestrator | Saturday 05 July 2025 23:05:09 +0000 (0:00:00.682) 0:09:56.544 ********* 2025-07-05 23:06:20.265432 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:06:20.265440 | orchestrator | 2025-07-05 23:06:20.265448 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2025-07-05 23:06:20.265456 | orchestrator | Saturday 05 July 2025 23:05:09 +0000 (0:00:00.738) 0:09:57.283 ********* 2025-07-05 23:06:20.265464 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.265472 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.265480 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.265488 | orchestrator | 2025-07-05 23:06:20.265496 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-07-05 23:06:20.265504 | orchestrator | Saturday 05 July 2025 23:05:10 +0000 (0:00:00.330) 0:09:57.613 ********* 2025-07-05 23:06:20.265512 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.265520 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.265528 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.265535 | orchestrator | 2025-07-05 23:06:20.265543 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-07-05 23:06:20.265553 | orchestrator | Saturday 05 July 2025 23:05:11 +0000 (0:00:01.173) 0:09:58.787 ********* 2025-07-05 23:06:20.265565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-05 23:06:20.265583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-05 23:06:20.265603 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-05 23:06:20.265614 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.265643 | orchestrator | 2025-07-05 23:06:20.265654 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-07-05 23:06:20.265667 | orchestrator | Saturday 05 July 2025 23:05:12 +0000 (0:00:00.846) 0:09:59.633 ********* 2025-07-05 23:06:20.265692 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.265704 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.265716 | orchestrator | ok: [testbed-node-5] 2025-07-05 
23:06:20.265728 | orchestrator | 2025-07-05 23:06:20.265740 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-07-05 23:06:20.265752 | orchestrator | 2025-07-05 23:06:20.265765 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-05 23:06:20.265779 | orchestrator | Saturday 05 July 2025 23:05:13 +0000 (0:00:00.817) 0:10:00.450 ********* 2025-07-05 23:06:20.265788 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:06:20.265796 | orchestrator | 2025-07-05 23:06:20.265804 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-05 23:06:20.265812 | orchestrator | Saturday 05 July 2025 23:05:13 +0000 (0:00:00.489) 0:10:00.940 ********* 2025-07-05 23:06:20.265820 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:06:20.265828 | orchestrator | 2025-07-05 23:06:20.265836 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-05 23:06:20.265844 | orchestrator | Saturday 05 July 2025 23:05:14 +0000 (0:00:00.733) 0:10:01.674 ********* 2025-07-05 23:06:20.265852 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.265860 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.265868 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.265876 | orchestrator | 2025-07-05 23:06:20.265884 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-05 23:06:20.265892 | orchestrator | Saturday 05 July 2025 23:05:14 +0000 (0:00:00.332) 0:10:02.006 ********* 2025-07-05 23:06:20.265900 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.265908 | orchestrator | ok: [testbed-node-4] 2025-07-05 
23:06:20.265916 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.265923 | orchestrator | 2025-07-05 23:06:20.265931 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-05 23:06:20.265939 | orchestrator | Saturday 05 July 2025 23:05:15 +0000 (0:00:00.734) 0:10:02.741 ********* 2025-07-05 23:06:20.265947 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.265955 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.265963 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.265971 | orchestrator | 2025-07-05 23:06:20.265979 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-05 23:06:20.265987 | orchestrator | Saturday 05 July 2025 23:05:16 +0000 (0:00:00.721) 0:10:03.462 ********* 2025-07-05 23:06:20.265995 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.266003 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.266011 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.266050 | orchestrator | 2025-07-05 23:06:20.266059 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-05 23:06:20.266067 | orchestrator | Saturday 05 July 2025 23:05:17 +0000 (0:00:01.000) 0:10:04.463 ********* 2025-07-05 23:06:20.266075 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.266083 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.266092 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.266100 | orchestrator | 2025-07-05 23:06:20.266115 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-05 23:06:20.266123 | orchestrator | Saturday 05 July 2025 23:05:17 +0000 (0:00:00.302) 0:10:04.765 ********* 2025-07-05 23:06:20.266131 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.266140 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.266147 | orchestrator | skipping: 
[testbed-node-5] 2025-07-05 23:06:20.266155 | orchestrator | 2025-07-05 23:06:20.266163 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-05 23:06:20.266171 | orchestrator | Saturday 05 July 2025 23:05:17 +0000 (0:00:00.319) 0:10:05.085 ********* 2025-07-05 23:06:20.266179 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.266193 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.266201 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.266209 | orchestrator | 2025-07-05 23:06:20.266217 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-05 23:06:20.266230 | orchestrator | Saturday 05 July 2025 23:05:17 +0000 (0:00:00.311) 0:10:05.397 ********* 2025-07-05 23:06:20.266239 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.266246 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.266254 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.266262 | orchestrator | 2025-07-05 23:06:20.266270 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-05 23:06:20.266278 | orchestrator | Saturday 05 July 2025 23:05:18 +0000 (0:00:01.045) 0:10:06.442 ********* 2025-07-05 23:06:20.266286 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.266294 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.266302 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.266310 | orchestrator | 2025-07-05 23:06:20.266321 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-05 23:06:20.266334 | orchestrator | Saturday 05 July 2025 23:05:19 +0000 (0:00:00.733) 0:10:07.176 ********* 2025-07-05 23:06:20.266350 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.266369 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.266381 | orchestrator | skipping: [testbed-node-5] 2025-07-05 
23:06:20.266393 | orchestrator | 2025-07-05 23:06:20.266406 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-05 23:06:20.266418 | orchestrator | Saturday 05 July 2025 23:05:20 +0000 (0:00:00.307) 0:10:07.483 ********* 2025-07-05 23:06:20.266431 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.266444 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.266457 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.266469 | orchestrator | 2025-07-05 23:06:20.266481 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-05 23:06:20.266494 | orchestrator | Saturday 05 July 2025 23:05:20 +0000 (0:00:00.301) 0:10:07.785 ********* 2025-07-05 23:06:20.266507 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.266520 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.266534 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.266547 | orchestrator | 2025-07-05 23:06:20.266559 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-05 23:06:20.266567 | orchestrator | Saturday 05 July 2025 23:05:20 +0000 (0:00:00.587) 0:10:08.373 ********* 2025-07-05 23:06:20.266576 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.266583 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.266591 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.266599 | orchestrator | 2025-07-05 23:06:20.266607 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-05 23:06:20.266615 | orchestrator | Saturday 05 July 2025 23:05:21 +0000 (0:00:00.366) 0:10:08.740 ********* 2025-07-05 23:06:20.266676 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.266686 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.266694 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.266702 | orchestrator | 2025-07-05 
23:06:20.266712 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-05 23:06:20.266726 | orchestrator | Saturday 05 July 2025 23:05:21 +0000 (0:00:00.319) 0:10:09.059 ********* 2025-07-05 23:06:20.266734 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.266742 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.266751 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.266758 | orchestrator | 2025-07-05 23:06:20.266766 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-05 23:06:20.266774 | orchestrator | Saturday 05 July 2025 23:05:21 +0000 (0:00:00.278) 0:10:09.338 ********* 2025-07-05 23:06:20.266782 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.266790 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.266798 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.266814 | orchestrator | 2025-07-05 23:06:20.266822 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-05 23:06:20.266830 | orchestrator | Saturday 05 July 2025 23:05:22 +0000 (0:00:00.439) 0:10:09.777 ********* 2025-07-05 23:06:20.266838 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.266846 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.266854 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.266861 | orchestrator | 2025-07-05 23:06:20.266870 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-05 23:06:20.266878 | orchestrator | Saturday 05 July 2025 23:05:22 +0000 (0:00:00.277) 0:10:10.054 ********* 2025-07-05 23:06:20.266885 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.266893 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.266901 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.266909 | orchestrator | 2025-07-05 23:06:20.266917 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-05 23:06:20.266926 | orchestrator | Saturday 05 July 2025 23:05:22 +0000 (0:00:00.295) 0:10:10.350 ********* 2025-07-05 23:06:20.266933 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.266941 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.266949 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.266957 | orchestrator | 2025-07-05 23:06:20.266965 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-07-05 23:06:20.266973 | orchestrator | Saturday 05 July 2025 23:05:23 +0000 (0:00:00.649) 0:10:10.999 ********* 2025-07-05 23:06:20.266981 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:06:20.266989 | orchestrator | 2025-07-05 23:06:20.266997 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-07-05 23:06:20.267011 | orchestrator | Saturday 05 July 2025 23:05:24 +0000 (0:00:00.473) 0:10:11.472 ********* 2025-07-05 23:06:20.267018 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:06:20.267025 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-05 23:06:20.267032 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-05 23:06:20.267038 | orchestrator | 2025-07-05 23:06:20.267045 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-07-05 23:06:20.267052 | orchestrator | Saturday 05 July 2025 23:05:26 +0000 (0:00:02.283) 0:10:13.755 ********* 2025-07-05 23:06:20.267058 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-05 23:06:20.267065 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-05 23:06:20.267072 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.267083 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-07-05 23:06:20.267090 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-05 23:06:20.267097 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.267104 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-05 23:06:20.267110 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-05 23:06:20.267117 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.267124 | orchestrator | 2025-07-05 23:06:20.267131 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-07-05 23:06:20.267138 | orchestrator | Saturday 05 July 2025 23:05:27 +0000 (0:00:01.159) 0:10:14.915 ********* 2025-07-05 23:06:20.267144 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.267151 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.267158 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.267164 | orchestrator | 2025-07-05 23:06:20.267171 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-07-05 23:06:20.267178 | orchestrator | Saturday 05 July 2025 23:05:27 +0000 (0:00:00.473) 0:10:15.388 ********* 2025-07-05 23:06:20.267185 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:06:20.267192 | orchestrator | 2025-07-05 23:06:20.267210 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-07-05 23:06:20.267217 | orchestrator | Saturday 05 July 2025 23:05:28 +0000 (0:00:00.470) 0:10:15.859 ********* 2025-07-05 23:06:20.267224 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-05 23:06:20.267231 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-05 23:06:20.267238 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-05 23:06:20.267245 | orchestrator | 2025-07-05 23:06:20.267252 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-07-05 23:06:20.267259 | orchestrator | Saturday 05 July 2025 23:05:29 +0000 (0:00:00.776) 0:10:16.636 ********* 2025-07-05 23:06:20.267266 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:06:20.267273 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-05 23:06:20.267280 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:06:20.267287 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-05 23:06:20.267293 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:06:20.267300 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-05 23:06:20.267307 | orchestrator | 2025-07-05 23:06:20.267314 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-07-05 23:06:20.267321 | orchestrator | Saturday 05 July 2025 23:05:33 +0000 (0:00:04.199) 0:10:20.835 ********* 2025-07-05 23:06:20.267328 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:06:20.267334 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-05 23:06:20.267341 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:06:20.267348 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-05 23:06:20.267355 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:06:20.267361 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-05 23:06:20.267368 | orchestrator | 2025-07-05 23:06:20.267375 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-07-05 23:06:20.267382 | orchestrator | Saturday 05 July 2025 23:05:36 +0000 (0:00:03.059) 0:10:23.895 ********* 2025-07-05 23:06:20.267388 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-05 23:06:20.267395 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.267402 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-05 23:06:20.267409 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.267416 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-05 23:06:20.267422 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.267429 | orchestrator | 2025-07-05 23:06:20.267436 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-07-05 23:06:20.267447 | orchestrator | Saturday 05 July 2025 23:05:37 +0000 (0:00:01.182) 0:10:25.077 ********* 2025-07-05 23:06:20.267454 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-07-05 23:06:20.267461 | orchestrator | 2025-07-05 23:06:20.267467 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-07-05 23:06:20.267474 | orchestrator | Saturday 05 July 2025 23:05:37 +0000 (0:00:00.267) 0:10:25.345 ********* 2025-07-05 23:06:20.267485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-07-05 23:06:20.267492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-05 23:06:20.267503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-05 23:06:20.267510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-05 23:06:20.267517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-05 23:06:20.267523 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.267530 | orchestrator | 2025-07-05 23:06:20.267537 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-07-05 23:06:20.267544 | orchestrator | Saturday 05 July 2025 23:05:38 +0000 (0:00:00.892) 0:10:26.238 ********* 2025-07-05 23:06:20.267551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-05 23:06:20.267557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-05 23:06:20.267564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-05 23:06:20.267571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-05 23:06:20.267578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-05 23:06:20.267585 | orchestrator | skipping: [testbed-node-3] 2025-07-05 
23:06:20.267592 | orchestrator | 2025-07-05 23:06:20.267599 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-07-05 23:06:20.267605 | orchestrator | Saturday 05 July 2025 23:05:39 +0000 (0:00:01.079) 0:10:27.318 ********* 2025-07-05 23:06:20.267612 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-05 23:06:20.267642 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-05 23:06:20.267656 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-05 23:06:20.267667 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-05 23:06:20.267678 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-05 23:06:20.267689 | orchestrator | 2025-07-05 23:06:20.267700 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-07-05 23:06:20.267710 | orchestrator | Saturday 05 July 2025 23:06:09 +0000 (0:00:29.224) 0:10:56.542 ********* 2025-07-05 23:06:20.267717 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.267724 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.267731 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.267738 | orchestrator | 2025-07-05 23:06:20.267744 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-07-05 23:06:20.267751 | orchestrator | 
Saturday 05 July 2025 23:06:09 +0000 (0:00:00.277) 0:10:56.820 ********* 2025-07-05 23:06:20.267758 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.267765 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.267777 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.267784 | orchestrator | 2025-07-05 23:06:20.267790 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-07-05 23:06:20.267797 | orchestrator | Saturday 05 July 2025 23:06:09 +0000 (0:00:00.268) 0:10:57.088 ********* 2025-07-05 23:06:20.267804 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:06:20.267811 | orchestrator | 2025-07-05 23:06:20.267817 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-07-05 23:06:20.267824 | orchestrator | Saturday 05 July 2025 23:06:10 +0000 (0:00:00.661) 0:10:57.749 ********* 2025-07-05 23:06:20.267831 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:06:20.267838 | orchestrator | 2025-07-05 23:06:20.267849 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-07-05 23:06:20.267856 | orchestrator | Saturday 05 July 2025 23:06:10 +0000 (0:00:00.478) 0:10:58.228 ********* 2025-07-05 23:06:20.267863 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.267870 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.267876 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.267883 | orchestrator | 2025-07-05 23:06:20.267890 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-07-05 23:06:20.267897 | orchestrator | Saturday 05 July 2025 23:06:11 +0000 (0:00:01.199) 0:10:59.427 ********* 2025-07-05 23:06:20.267904 | orchestrator | changed: 
[testbed-node-3] 2025-07-05 23:06:20.267911 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.267917 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.267924 | orchestrator | 2025-07-05 23:06:20.267931 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-07-05 23:06:20.267943 | orchestrator | Saturday 05 July 2025 23:06:13 +0000 (0:00:01.297) 0:11:00.725 ********* 2025-07-05 23:06:20.267950 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:06:20.267956 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:06:20.267963 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:06:20.267970 | orchestrator | 2025-07-05 23:06:20.267977 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-07-05 23:06:20.267983 | orchestrator | Saturday 05 July 2025 23:06:14 +0000 (0:00:01.696) 0:11:02.421 ********* 2025-07-05 23:06:20.267990 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-05 23:06:20.267997 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-05 23:06:20.268004 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-05 23:06:20.268011 | orchestrator | 2025-07-05 23:06:20.268018 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-05 23:06:20.268025 | orchestrator | Saturday 05 July 2025 23:06:17 +0000 (0:00:02.385) 0:11:04.806 ********* 2025-07-05 23:06:20.268032 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.268039 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.268045 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.268052 | orchestrator 
| 2025-07-05 23:06:20.268059 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-07-05 23:06:20.268066 | orchestrator | Saturday 05 July 2025 23:06:17 +0000 (0:00:00.307) 0:11:05.113 ********* 2025-07-05 23:06:20.268073 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:06:20.268079 | orchestrator | 2025-07-05 23:06:20.268086 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-07-05 23:06:20.268093 | orchestrator | Saturday 05 July 2025 23:06:18 +0000 (0:00:00.454) 0:11:05.568 ********* 2025-07-05 23:06:20.268104 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.268111 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.268118 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.268125 | orchestrator | 2025-07-05 23:06:20.268131 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-07-05 23:06:20.268138 | orchestrator | Saturday 05 July 2025 23:06:18 +0000 (0:00:00.443) 0:11:06.012 ********* 2025-07-05 23:06:20.268145 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:06:20.268152 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:06:20.268159 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:06:20.268165 | orchestrator | 2025-07-05 23:06:20.268172 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-07-05 23:06:20.268179 | orchestrator | Saturday 05 July 2025 23:06:18 +0000 (0:00:00.291) 0:11:06.304 ********* 2025-07-05 23:06:20.268186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-05 23:06:20.268193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-05 23:06:20.268200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-05 23:06:20.268206 | orchestrator 
| skipping: [testbed-node-3] 2025-07-05 23:06:20.268213 | orchestrator | 2025-07-05 23:06:20.268220 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-07-05 23:06:20.268227 | orchestrator | Saturday 05 July 2025 23:06:19 +0000 (0:00:00.533) 0:11:06.837 ********* 2025-07-05 23:06:20.268234 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:06:20.268240 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:06:20.268247 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:06:20.268254 | orchestrator | 2025-07-05 23:06:20.268261 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:06:20.268268 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-07-05 23:06:20.268275 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-07-05 23:06:20.268282 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-07-05 23:06:20.268289 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-07-05 23:06:20.268296 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-07-05 23:06:20.268307 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-07-05 23:06:20.268314 | orchestrator | 2025-07-05 23:06:20.268321 | orchestrator | 2025-07-05 23:06:20.268328 | orchestrator | 2025-07-05 23:06:20.268335 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:06:20.268342 | orchestrator | Saturday 05 July 2025 23:06:19 +0000 (0:00:00.215) 0:11:07.053 ********* 2025-07-05 23:06:20.268349 | orchestrator | =============================================================================== 
2025-07-05 23:06:20.268356 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 67.07s 2025-07-05 23:06:20.268362 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.01s 2025-07-05 23:06:20.268369 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.35s 2025-07-05 23:06:20.268380 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.22s 2025-07-05 23:06:20.268387 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.98s 2025-07-05 23:06:20.268393 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.78s 2025-07-05 23:06:20.268400 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.05s 2025-07-05 23:06:20.268417 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.57s 2025-07-05 23:06:20.268424 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.65s 2025-07-05 23:06:20.268431 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.83s 2025-07-05 23:06:20.268438 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.63s 2025-07-05 23:06:20.268444 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.04s 2025-07-05 23:06:20.268451 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.82s 2025-07-05 23:06:20.268458 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.29s 2025-07-05 23:06:20.268465 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.20s 2025-07-05 23:06:20.268471 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.13s 2025-07-05 
23:06:20.268478 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.95s 2025-07-05 23:06:20.268485 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.45s 2025-07-05 23:06:20.268491 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.28s 2025-07-05 23:06:20.268498 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.17s 2025-07-05 23:06:20.268505 | orchestrator | 2025-07-05 23:06:20 | INFO  | Task 42fa5f8c-034e-49dc-a127-512ab30d0761 is in state STARTED 2025-07-05 23:06:20.268512 | orchestrator | 2025-07-05 23:06:20 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:06:23.276339 | orchestrator | 2025-07-05 23:06:23 | INFO  | Task 9390c108-b77a-4e8e-a0cb-cf0655c5b145 is in state STARTED 2025-07-05 23:06:23.277841 | orchestrator | 2025-07-05 23:06:23 | INFO  | Task 85c1671a-9d3e-4d0c-a62b-29d6c3bf5bdf is in state STARTED 2025-07-05 23:06:23.279536 | orchestrator | 2025-07-05 23:06:23 | INFO  | Task 42fa5f8c-034e-49dc-a127-512ab30d0761 is in state STARTED 2025-07-05 23:06:23.279601 | orchestrator | 2025-07-05 23:06:23 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:07:12.003625 | orchestrator | 2025-07-05 23:07:12 | INFO  | Task 9390c108-b77a-4e8e-a0cb-cf0655c5b145 is in state STARTED 2025-07-05 23:07:12.005001 | orchestrator | 2025-07-05 23:07:12 | INFO  | Task 85c1671a-9d3e-4d0c-a62b-29d6c3bf5bdf is in state STARTED 2025-07-05 23:07:12.007624 | orchestrator | 2025-07-05 23:07:12.007652 | orchestrator | 2025-07-05 23:07:12 | INFO  | Task 42fa5f8c-034e-49dc-a127-512ab30d0761 is in state SUCCESS 2025-07-05 23:07:12.009810 | orchestrator | 2025-07-05 23:07:12.009846 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:07:12.009859 | orchestrator | 2025-07-05 23:07:12.009871 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 23:07:12.009883 | orchestrator | Saturday 05 July 2025 23:04:07 +0000 (0:00:00.294) 0:00:00.294 ********* 2025-07-05
23:07:12.009894 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:12.009907 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:07:12.009918 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:07:12.009930 | orchestrator | 2025-07-05 23:07:12.009941 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:07:12.009952 | orchestrator | Saturday 05 July 2025 23:04:07 +0000 (0:00:00.307) 0:00:00.601 ********* 2025-07-05 23:07:12.009965 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-07-05 23:07:12.010008 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-07-05 23:07:12.010306 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-07-05 23:07:12.010322 | orchestrator | 2025-07-05 23:07:12.010334 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-07-05 23:07:12.010345 | orchestrator | 2025-07-05 23:07:12.010357 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-05 23:07:12.010369 | orchestrator | Saturday 05 July 2025 23:04:07 +0000 (0:00:00.503) 0:00:01.105 ********* 2025-07-05 23:07:12.010381 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:07:12.010393 | orchestrator | 2025-07-05 23:07:12.010404 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-07-05 23:07:12.010415 | orchestrator | Saturday 05 July 2025 23:04:08 +0000 (0:00:00.532) 0:00:01.638 ********* 2025-07-05 23:07:12.010426 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-05 23:07:12.010437 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-05 23:07:12.010448 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'vm.max_map_count', 'value': 262144}) 2025-07-05 23:07:12.010459 | orchestrator | 2025-07-05 23:07:12.010470 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-07-05 23:07:12.010482 | orchestrator | Saturday 05 July 2025 23:04:09 +0000 (0:00:00.696) 0:00:02.334 ********* 2025-07-05 23:07:12.010497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:07:12.010532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:07:12.010557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:07:12.010585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-07-05 23:07:12.010600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:07:12.010620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:07:12.010633 | orchestrator | 2025-07-05 23:07:12.010645 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-05 23:07:12.010656 | orchestrator | Saturday 05 July 2025 23:04:10 +0000 (0:00:01.695) 0:00:04.030 ********* 2025-07-05 23:07:12.010667 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:07:12.010679 | orchestrator | 2025-07-05 23:07:12.010690 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-07-05 23:07:12.010701 | orchestrator | Saturday 05 July 2025 23:04:11 +0000 (0:00:00.526) 0:00:04.556 ********* 2025-07-05 23:07:12.010747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:07:12.010768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:07:12.010780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:07:12.010793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:07:12.010818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:07:12.010838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:07:12.010851 | orchestrator | 2025-07-05 23:07:12.010862 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-07-05 23:07:12.010875 | orchestrator | Saturday 05 July 2025 23:04:14 +0000 (0:00:02.653) 0:00:07.209 ********* 2025-07-05 23:07:12.010886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}})  2025-07-05 23:07:12.010904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-05 23:07:12.010917 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:12.010929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-05 23:07:12.010956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-05 23:07:12.010970 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:12.010982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-05 23:07:12.010994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-05 23:07:12.011006 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:12.011017 | orchestrator | 2025-07-05 23:07:12.011028 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-07-05 23:07:12.011045 | orchestrator | Saturday 05 July 2025 23:04:15 +0000 (0:00:01.231) 0:00:08.441 ********* 2025-07-05 23:07:12.011057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-05 23:07:12.011085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-05 23:07:12.011099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-05 23:07:12.011111 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:12.011123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-05 23:07:12.011135 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:12.011152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-05 23:07:12.011179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-05 23:07:12.011192 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:12.011203 | orchestrator | 2025-07-05 23:07:12.011214 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-07-05 23:07:12.011226 | orchestrator | Saturday 05 July 2025 23:04:16 +0000 (0:00:00.897) 0:00:09.339 ********* 2025-07-05 23:07:12.011237 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:07:12.011250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:07:12.011267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:07:12.011300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:07:12.011314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:07:12.011327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:07:12.011339 | orchestrator | 2025-07-05 23:07:12.011351 | orchestrator | TASK [opensearch : 
Copying over opensearch service config file] **************** 2025-07-05 23:07:12.011363 | orchestrator | Saturday 05 July 2025 23:04:18 +0000 (0:00:02.403) 0:00:11.742 ********* 2025-07-05 23:07:12.011381 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:07:12.011392 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:12.011404 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:07:12.011415 | orchestrator | 2025-07-05 23:07:12.011426 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-07-05 23:07:12.011437 | orchestrator | Saturday 05 July 2025 23:04:22 +0000 (0:00:03.694) 0:00:15.436 ********* 2025-07-05 23:07:12.011448 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:12.011459 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:07:12.011470 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:07:12.011481 | orchestrator | 2025-07-05 23:07:12.011492 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-07-05 23:07:12.011508 | orchestrator | Saturday 05 July 2025 23:04:23 +0000 (0:00:01.653) 0:00:17.090 ********* 2025-07-05 23:07:12.011520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:07:12 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:07:12.011540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:07:12.011568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-05 23:07:12.011581 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:07:12.011601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:07:12.011621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-05 23:07:12.011634 | orchestrator | 2025-07-05 23:07:12.011645 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-05 23:07:12.011657 | orchestrator | Saturday 05 July 2025 23:04:26 +0000 (0:00:02.490) 0:00:19.581 ********* 2025-07-05 23:07:12.011668 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:12.011679 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:12.011691 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:12.011702 | orchestrator | 2025-07-05 23:07:12.011748 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-07-05 23:07:12.011760 | orchestrator | Saturday 05 July 2025 23:04:26 +0000 (0:00:00.301) 0:00:19.883 ********* 2025-07-05 23:07:12.011771 | 
orchestrator | 2025-07-05 23:07:12.011782 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-07-05 23:07:12.011793 | orchestrator | Saturday 05 July 2025 23:04:26 +0000 (0:00:00.068) 0:00:19.951 ********* 2025-07-05 23:07:12.011804 | orchestrator | 2025-07-05 23:07:12.011815 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-07-05 23:07:12.011827 | orchestrator | Saturday 05 July 2025 23:04:26 +0000 (0:00:00.063) 0:00:20.014 ********* 2025-07-05 23:07:12.011838 | orchestrator | 2025-07-05 23:07:12.011849 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-07-05 23:07:12.011868 | orchestrator | Saturday 05 July 2025 23:04:27 +0000 (0:00:00.241) 0:00:20.256 ********* 2025-07-05 23:07:12.011880 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:12.011891 | orchestrator | 2025-07-05 23:07:12.011902 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-07-05 23:07:12.011913 | orchestrator | Saturday 05 July 2025 23:04:27 +0000 (0:00:00.197) 0:00:20.454 ********* 2025-07-05 23:07:12.011924 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:12.011935 | orchestrator | 2025-07-05 23:07:12.011946 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-07-05 23:07:12.011957 | orchestrator | Saturday 05 July 2025 23:04:27 +0000 (0:00:00.215) 0:00:20.669 ********* 2025-07-05 23:07:12.011968 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:12.011979 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:07:12.011990 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:07:12.012001 | orchestrator | 2025-07-05 23:07:12.012012 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-07-05 23:07:12.012024 | orchestrator | Saturday 05 July 2025 23:05:41 
+0000 (0:01:14.341) 0:01:35.010 ********* 2025-07-05 23:07:12.012035 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:12.012046 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:07:12.012057 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:07:12.012069 | orchestrator | 2025-07-05 23:07:12.012080 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-05 23:07:12.012091 | orchestrator | Saturday 05 July 2025 23:06:59 +0000 (0:01:17.711) 0:02:52.722 ********* 2025-07-05 23:07:12.012102 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:07:12.012113 | orchestrator | 2025-07-05 23:07:12.012124 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-07-05 23:07:12.012136 | orchestrator | Saturday 05 July 2025 23:07:00 +0000 (0:00:00.565) 0:02:53.287 ********* 2025-07-05 23:07:12.012147 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:12.012157 | orchestrator | 2025-07-05 23:07:12.012169 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-07-05 23:07:12.012272 | orchestrator | Saturday 05 July 2025 23:07:02 +0000 (0:00:02.326) 0:02:55.613 ********* 2025-07-05 23:07:12.012299 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:12.012310 | orchestrator | 2025-07-05 23:07:12.012321 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-07-05 23:07:12.012333 | orchestrator | Saturday 05 July 2025 23:07:04 +0000 (0:00:02.215) 0:02:57.829 ********* 2025-07-05 23:07:12.012344 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:12.012355 | orchestrator | 2025-07-05 23:07:12.012366 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-07-05 23:07:12.012377 | orchestrator | Saturday 05 July 2025 23:07:07 +0000 
(0:00:02.708) 0:03:00.538 ********* 2025-07-05 23:07:12.012388 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:12.012399 | orchestrator | 2025-07-05 23:07:12.012410 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:07:12.012421 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-05 23:07:12.012434 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-05 23:07:12.012454 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-05 23:07:12.012465 | orchestrator | 2025-07-05 23:07:12.012477 | orchestrator | 2025-07-05 23:07:12.012488 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:07:12.012499 | orchestrator | Saturday 05 July 2025 23:07:09 +0000 (0:00:02.195) 0:03:02.733 ********* 2025-07-05 23:07:12.012518 | orchestrator | =============================================================================== 2025-07-05 23:07:12.012529 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 77.71s 2025-07-05 23:07:12.012541 | orchestrator | opensearch : Restart opensearch container ------------------------------ 74.34s 2025-07-05 23:07:12.012552 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.69s 2025-07-05 23:07:12.012563 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.71s 2025-07-05 23:07:12.012574 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.65s 2025-07-05 23:07:12.012585 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.49s 2025-07-05 23:07:12.012596 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.40s 
2025-07-05 23:07:12.012607 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.33s 2025-07-05 23:07:12.012618 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.22s 2025-07-05 23:07:12.012629 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.20s 2025-07-05 23:07:12.012640 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.70s 2025-07-05 23:07:12.012650 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.65s 2025-07-05 23:07:12.012662 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.23s 2025-07-05 23:07:12.012673 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.90s 2025-07-05 23:07:12.012684 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.70s 2025-07-05 23:07:12.012695 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2025-07-05 23:07:12.012752 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-07-05 23:07:12.012766 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-07-05 23:07:12.012777 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2025-07-05 23:07:12.012788 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.37s 2025-07-05 23:07:15.054340 | orchestrator | 2025-07-05 23:07:15 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:07:15.056632 | orchestrator | 2025-07-05 23:07:15 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:07:15.056859 | orchestrator | 2025-07-05 23:07:15 | INFO  | Task 
9390c108-b77a-4e8e-a0cb-cf0655c5b145 is in state STARTED 2025-07-05 23:07:15.058856 | orchestrator | 2025-07-05 23:07:15 | INFO  | Task 85c1671a-9d3e-4d0c-a62b-29d6c3bf5bdf is in state SUCCESS 2025-07-05 23:07:15.060629 | orchestrator | 2025-07-05 23:07:15.060665 | orchestrator | 2025-07-05 23:07:15.060678 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-07-05 23:07:15.060692 | orchestrator | 2025-07-05 23:07:15.060703 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-07-05 23:07:15.060786 | orchestrator | Saturday 05 July 2025 23:04:06 +0000 (0:00:00.098) 0:00:00.098 ********* 2025-07-05 23:07:15.060801 | orchestrator | ok: [localhost] => { 2025-07-05 23:07:15.060815 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-07-05 23:07:15.060826 | orchestrator | } 2025-07-05 23:07:15.060838 | orchestrator | 2025-07-05 23:07:15.060849 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-07-05 23:07:15.060861 | orchestrator | Saturday 05 July 2025 23:04:06 +0000 (0:00:00.048) 0:00:00.146 ********* 2025-07-05 23:07:15.060872 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-07-05 23:07:15.060885 | orchestrator | ...ignoring 2025-07-05 23:07:15.060897 | orchestrator | 2025-07-05 23:07:15.060947 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-07-05 23:07:15.060959 | orchestrator | Saturday 05 July 2025 23:04:09 +0000 (0:00:02.983) 0:00:03.130 ********* 2025-07-05 23:07:15.060970 | orchestrator | skipping: [localhost] 2025-07-05 23:07:15.060981 | orchestrator | 2025-07-05 23:07:15.060992 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-07-05 23:07:15.061003 | orchestrator | Saturday 05 July 2025 23:04:09 +0000 (0:00:00.052) 0:00:03.182 ********* 2025-07-05 23:07:15.061014 | orchestrator | ok: [localhost] 2025-07-05 23:07:15.061025 | orchestrator | 2025-07-05 23:07:15.061036 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:07:15.061047 | orchestrator | 2025-07-05 23:07:15.061059 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 23:07:15.061071 | orchestrator | Saturday 05 July 2025 23:04:10 +0000 (0:00:00.170) 0:00:03.353 ********* 2025-07-05 23:07:15.061083 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:15.061094 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:07:15.061105 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:07:15.061116 | orchestrator | 2025-07-05 23:07:15.061127 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:07:15.061138 | orchestrator | Saturday 05 July 2025 23:04:10 +0000 (0:00:00.343) 0:00:03.697 ********* 2025-07-05 23:07:15.061149 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-07-05 23:07:15.061161 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-07-05 23:07:15.061172 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-07-05 23:07:15.061183 | orchestrator | 2025-07-05 23:07:15.061194 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-07-05 23:07:15.061205 | orchestrator | 2025-07-05 23:07:15.061216 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-07-05 23:07:15.061227 | orchestrator | Saturday 05 July 2025 23:04:11 +0000 (0:00:00.552) 0:00:04.250 ********* 2025-07-05 23:07:15.061240 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-05 23:07:15.061254 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-05 23:07:15.061267 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-05 23:07:15.061280 | orchestrator | 2025-07-05 23:07:15.061294 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-05 23:07:15.061307 | orchestrator | Saturday 05 July 2025 23:04:11 +0000 (0:00:00.389) 0:00:04.639 ********* 2025-07-05 23:07:15.061320 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:07:15.061334 | orchestrator | 2025-07-05 23:07:15.061346 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-07-05 23:07:15.061359 | orchestrator | Saturday 05 July 2025 23:04:12 +0000 (0:00:00.729) 0:00:05.369 ********* 2025-07-05 23:07:15.061396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-05 23:07:15.061429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-05 23:07:15.061445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-05 23:07:15.061467 | orchestrator | 2025-07-05 23:07:15.061486 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-07-05 23:07:15.061500 | orchestrator | Saturday 05 July 2025 23:04:15 +0000 (0:00:03.071) 0:00:08.441 ********* 2025-07-05 23:07:15.061513 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.061527 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:15.061540 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.061552 | orchestrator | 2025-07-05 23:07:15.061566 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-07-05 23:07:15.061579 | orchestrator | Saturday 05 July 2025 23:04:15 +0000 (0:00:00.715) 0:00:09.156 ********* 2025-07-05 23:07:15.061592 | orchestrator | skipping: [testbed-node-1] 2025-07-05 
23:07:15.061603 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.061615 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:15.061626 | orchestrator | 2025-07-05 23:07:15.061637 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-07-05 23:07:15.061648 | orchestrator | Saturday 05 July 2025 23:04:17 +0000 (0:00:01.358) 0:00:10.514 ********* 2025-07-05 23:07:15.061665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-05 23:07:15.061685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-05 23:07:15.061710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-05 
23:07:15.061759 | orchestrator | 2025-07-05 23:07:15.061780 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-07-05 23:07:15.061798 | orchestrator | Saturday 05 July 2025 23:04:21 +0000 (0:00:04.522) 0:00:15.037 ********* 2025-07-05 23:07:15.061813 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.061824 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.061835 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:15.061846 | orchestrator | 2025-07-05 23:07:15.061857 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-07-05 23:07:15.061868 | orchestrator | Saturday 05 July 2025 23:04:22 +0000 (0:00:01.160) 0:00:16.198 ********* 2025-07-05 23:07:15.061879 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:15.061890 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:07:15.061901 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:07:15.061912 | orchestrator | 2025-07-05 23:07:15.061923 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-05 23:07:15.061934 | orchestrator | Saturday 05 July 2025 23:04:27 +0000 (0:00:04.396) 0:00:20.595 ********* 2025-07-05 23:07:15.061945 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:07:15.061963 | orchestrator | 2025-07-05 23:07:15.061974 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-05 23:07:15.061985 | orchestrator | Saturday 05 July 2025 23:04:28 +0000 (0:00:00.678) 0:00:21.273 ********* 2025-07-05 23:07:15.062007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-05 23:07:15.062083 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:15.062116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-05 23:07:15.062150 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.062178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-05 23:07:15.062191 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.062202 | orchestrator | 2025-07-05 23:07:15.062213 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-07-05 23:07:15.062224 | orchestrator | Saturday 05 July 2025 23:04:32 +0000 (0:00:04.006) 0:00:25.279 ********* 2025-07-05 23:07:15.062241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-05 23:07:15.062274 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.062292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-05 23:07:15.062305 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:15.062321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-05 23:07:15.062333 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.062344 | orchestrator | 2025-07-05 23:07:15.062355 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-07-05 23:07:15.062366 | orchestrator | Saturday 05 July 2025 23:04:34 +0000 (0:00:02.826) 0:00:28.105 ********* 2025-07-05 23:07:15.062392 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-05 23:07:15.062404 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:15.062430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-05 23:07:15.062443 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.062455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-05 23:07:15.062474 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.062485 | orchestrator | 2025-07-05 23:07:15.062496 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-07-05 23:07:15.062507 | orchestrator | Saturday 05 July 2025 23:04:37 +0000 
(0:00:02.911) 0:00:31.017 ********* 2025-07-05 23:07:15.062532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-05 23:07:15.062545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-05 23:07:15.062577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-05 23:07:15.062591 | orchestrator | 2025-07-05 23:07:15.062602 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-07-05 23:07:15.062613 | orchestrator | Saturday 05 July 2025 23:04:41 +0000 (0:00:03.474) 0:00:34.492 ********* 2025-07-05 23:07:15.062624 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:15.062635 | orchestrator | 
changed: [testbed-node-1] 2025-07-05 23:07:15.062646 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:07:15.062657 | orchestrator | 2025-07-05 23:07:15.062668 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-07-05 23:07:15.062679 | orchestrator | Saturday 05 July 2025 23:04:42 +0000 (0:00:01.096) 0:00:35.589 ********* 2025-07-05 23:07:15.062690 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:15.062707 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:07:15.062746 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:07:15.062758 | orchestrator | 2025-07-05 23:07:15.062769 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-07-05 23:07:15.062780 | orchestrator | Saturday 05 July 2025 23:04:42 +0000 (0:00:00.433) 0:00:36.022 ********* 2025-07-05 23:07:15.062791 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:15.062802 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:07:15.062813 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:07:15.062824 | orchestrator | 2025-07-05 23:07:15.062835 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-07-05 23:07:15.062846 | orchestrator | Saturday 05 July 2025 23:04:43 +0000 (0:00:00.443) 0:00:36.466 ********* 2025-07-05 23:07:15.062858 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-07-05 23:07:15.062869 | orchestrator | ...ignoring 2025-07-05 23:07:15.062881 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-07-05 23:07:15.062892 | orchestrator | ...ignoring 2025-07-05 23:07:15.062903 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-07-05 23:07:15.062914 | orchestrator | ...ignoring 2025-07-05 23:07:15.062925 | orchestrator | 2025-07-05 23:07:15.062936 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-07-05 23:07:15.062947 | orchestrator | Saturday 05 July 2025 23:04:54 +0000 (0:00:10.925) 0:00:47.391 ********* 2025-07-05 23:07:15.062958 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:15.062969 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:07:15.062980 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:07:15.062991 | orchestrator | 2025-07-05 23:07:15.063002 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-07-05 23:07:15.063013 | orchestrator | Saturday 05 July 2025 23:04:54 +0000 (0:00:00.619) 0:00:48.010 ********* 2025-07-05 23:07:15.063023 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:15.063034 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.063045 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.063056 | orchestrator | 2025-07-05 23:07:15.063067 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-07-05 23:07:15.063078 | orchestrator | Saturday 05 July 2025 23:04:55 +0000 (0:00:00.432) 0:00:48.443 ********* 2025-07-05 23:07:15.063089 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:15.063101 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.063112 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.063123 | orchestrator | 2025-07-05 23:07:15.063133 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-07-05 23:07:15.063145 | orchestrator | Saturday 05 July 2025 23:04:55 +0000 (0:00:00.433) 0:00:48.876 ********* 2025-07-05 23:07:15.063155 | orchestrator | skipping: 
[testbed-node-0] 2025-07-05 23:07:15.063166 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.063177 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.063188 | orchestrator | 2025-07-05 23:07:15.063199 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-07-05 23:07:15.063211 | orchestrator | Saturday 05 July 2025 23:04:56 +0000 (0:00:00.473) 0:00:49.350 ********* 2025-07-05 23:07:15.063221 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:15.063232 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:07:15.063243 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:07:15.063254 | orchestrator | 2025-07-05 23:07:15.063265 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-07-05 23:07:15.063276 | orchestrator | Saturday 05 July 2025 23:04:56 +0000 (0:00:00.638) 0:00:49.989 ********* 2025-07-05 23:07:15.063293 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:15.063311 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.063322 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.063333 | orchestrator | 2025-07-05 23:07:15.063344 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-05 23:07:15.063355 | orchestrator | Saturday 05 July 2025 23:04:57 +0000 (0:00:00.394) 0:00:50.383 ********* 2025-07-05 23:07:15.063366 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.063377 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.063388 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-07-05 23:07:15.063400 | orchestrator | 2025-07-05 23:07:15.063410 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-07-05 23:07:15.063421 | orchestrator | Saturday 05 July 2025 23:04:57 +0000 (0:00:00.397) 0:00:50.780 ********* 2025-07-05 
23:07:15.063432 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:15.063443 | orchestrator | 2025-07-05 23:07:15.063455 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-07-05 23:07:15.063471 | orchestrator | Saturday 05 July 2025 23:05:07 +0000 (0:00:09.983) 0:01:00.764 ********* 2025-07-05 23:07:15.063482 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:15.063493 | orchestrator | 2025-07-05 23:07:15.063504 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-05 23:07:15.063516 | orchestrator | Saturday 05 July 2025 23:05:07 +0000 (0:00:00.121) 0:01:00.886 ********* 2025-07-05 23:07:15.063527 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:15.063537 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.063548 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.063559 | orchestrator | 2025-07-05 23:07:15.063570 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-07-05 23:07:15.063582 | orchestrator | Saturday 05 July 2025 23:05:08 +0000 (0:00:01.023) 0:01:01.909 ********* 2025-07-05 23:07:15.063593 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:15.063603 | orchestrator | 2025-07-05 23:07:15.063614 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-07-05 23:07:15.063626 | orchestrator | Saturday 05 July 2025 23:05:16 +0000 (0:00:07.682) 0:01:09.592 ********* 2025-07-05 23:07:15.063636 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:15.063648 | orchestrator | 2025-07-05 23:07:15.063658 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-07-05 23:07:15.063670 | orchestrator | Saturday 05 July 2025 23:05:17 +0000 (0:00:01.617) 0:01:11.210 ********* 2025-07-05 23:07:15.063681 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:15.063691 | 
orchestrator | 2025-07-05 23:07:15.063703 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-07-05 23:07:15.063738 | orchestrator | Saturday 05 July 2025 23:05:20 +0000 (0:00:02.535) 0:01:13.746 ********* 2025-07-05 23:07:15.063750 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:15.063761 | orchestrator | 2025-07-05 23:07:15.063772 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-07-05 23:07:15.063783 | orchestrator | Saturday 05 July 2025 23:05:20 +0000 (0:00:00.117) 0:01:13.863 ********* 2025-07-05 23:07:15.063794 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:15.063805 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.063816 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.063827 | orchestrator | 2025-07-05 23:07:15.063838 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-07-05 23:07:15.063849 | orchestrator | Saturday 05 July 2025 23:05:21 +0000 (0:00:00.517) 0:01:14.380 ********* 2025-07-05 23:07:15.063860 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:15.063871 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-07-05 23:07:15.063882 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:07:15.063893 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:07:15.063904 | orchestrator | 2025-07-05 23:07:15.063915 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-05 23:07:15.063932 | orchestrator | skipping: no hosts matched 2025-07-05 23:07:15.063943 | orchestrator | 2025-07-05 23:07:15.063954 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-05 23:07:15.063965 | orchestrator | 2025-07-05 23:07:15.063976 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-07-05 23:07:15.063987 | orchestrator | Saturday 05 July 2025 23:05:21 +0000 (0:00:00.337) 0:01:14.718 ********* 2025-07-05 23:07:15.063998 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:07:15.064009 | orchestrator | 2025-07-05 23:07:15.064020 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-05 23:07:15.064031 | orchestrator | Saturday 05 July 2025 23:05:43 +0000 (0:00:22.335) 0:01:37.054 ********* 2025-07-05 23:07:15.064042 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:07:15.064053 | orchestrator | 2025-07-05 23:07:15.064064 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-05 23:07:15.064075 | orchestrator | Saturday 05 July 2025 23:05:59 +0000 (0:00:15.574) 0:01:52.628 ********* 2025-07-05 23:07:15.064086 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:07:15.064097 | orchestrator | 2025-07-05 23:07:15.064108 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-05 23:07:15.064119 | orchestrator | 2025-07-05 23:07:15.064130 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-05 23:07:15.064141 | orchestrator | Saturday 05 July 2025 23:06:01 +0000 (0:00:02.578) 0:01:55.207 ********* 2025-07-05 23:07:15.064152 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:07:15.064163 | orchestrator | 2025-07-05 23:07:15.064174 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-05 23:07:15.064185 | orchestrator | Saturday 05 July 2025 23:06:20 +0000 (0:00:18.127) 0:02:13.334 ********* 2025-07-05 23:07:15.064196 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:07:15.064206 | orchestrator | 2025-07-05 23:07:15.064217 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-05 23:07:15.064228 
| orchestrator | Saturday 05 July 2025 23:06:40 +0000 (0:00:20.675) 0:02:34.009 ********* 2025-07-05 23:07:15.064239 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:07:15.064250 | orchestrator | 2025-07-05 23:07:15.064261 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-05 23:07:15.064272 | orchestrator | 2025-07-05 23:07:15.064289 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-05 23:07:15.064301 | orchestrator | Saturday 05 July 2025 23:06:43 +0000 (0:00:02.354) 0:02:36.364 ********* 2025-07-05 23:07:15.064312 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:15.064323 | orchestrator | 2025-07-05 23:07:15.064334 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-05 23:07:15.064345 | orchestrator | Saturday 05 July 2025 23:06:53 +0000 (0:00:10.490) 0:02:46.855 ********* 2025-07-05 23:07:15.064356 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:15.064367 | orchestrator | 2025-07-05 23:07:15.064378 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-05 23:07:15.064389 | orchestrator | Saturday 05 July 2025 23:06:58 +0000 (0:00:04.557) 0:02:51.412 ********* 2025-07-05 23:07:15.064400 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:15.064411 | orchestrator | 2025-07-05 23:07:15.064422 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-05 23:07:15.064433 | orchestrator | 2025-07-05 23:07:15.064444 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-05 23:07:15.064460 | orchestrator | Saturday 05 July 2025 23:07:00 +0000 (0:00:02.142) 0:02:53.554 ********* 2025-07-05 23:07:15.064472 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:07:15.064483 | orchestrator | 
2025-07-05 23:07:15.064494 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-07-05 23:07:15.064505 | orchestrator | Saturday 05 July 2025 23:07:00 +0000 (0:00:00.476) 0:02:54.031 ********* 2025-07-05 23:07:15.064527 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.064539 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.064550 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:15.064561 | orchestrator | 2025-07-05 23:07:15.064572 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-07-05 23:07:15.064583 | orchestrator | Saturday 05 July 2025 23:07:03 +0000 (0:00:02.309) 0:02:56.341 ********* 2025-07-05 23:07:15.064594 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.064605 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.064615 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:15.064626 | orchestrator | 2025-07-05 23:07:15.064637 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-07-05 23:07:15.064648 | orchestrator | Saturday 05 July 2025 23:07:05 +0000 (0:00:02.257) 0:02:58.599 ********* 2025-07-05 23:07:15.064659 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.064670 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.064681 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:15.064692 | orchestrator | 2025-07-05 23:07:15.064703 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-07-05 23:07:15.064814 | orchestrator | Saturday 05 July 2025 23:07:07 +0000 (0:00:02.253) 0:03:00.853 ********* 2025-07-05 23:07:15.064829 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.064840 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.064851 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:07:15.064862 | orchestrator | 
2025-07-05 23:07:15.064873 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-07-05 23:07:15.064884 | orchestrator | Saturday 05 July 2025 23:07:09 +0000 (0:00:01.866) 0:03:02.719 ********* 2025-07-05 23:07:15.064895 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:07:15.064906 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:07:15.064917 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:07:15.064928 | orchestrator | 2025-07-05 23:07:15.064939 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-05 23:07:15.064950 | orchestrator | Saturday 05 July 2025 23:07:12 +0000 (0:00:02.604) 0:03:05.323 ********* 2025-07-05 23:07:15.064961 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:07:15.064972 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:07:15.064983 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:07:15.064994 | orchestrator | 2025-07-05 23:07:15.065005 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:07:15.065016 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-05 23:07:15.065027 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-07-05 23:07:15.065040 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-05 23:07:15.065052 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-05 23:07:15.065063 | orchestrator | 2025-07-05 23:07:15.065074 | orchestrator | 2025-07-05 23:07:15.065085 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:07:15.065096 | orchestrator | Saturday 05 July 2025 23:07:12 +0000 (0:00:00.221) 0:03:05.545 ********* 2025-07-05 23:07:15.065107 | 
orchestrator | =============================================================================== 2025-07-05 23:07:15.065118 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 40.46s 2025-07-05 23:07:15.065129 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.25s 2025-07-05 23:07:15.065140 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.93s 2025-07-05 23:07:15.065151 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.49s 2025-07-05 23:07:15.065169 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.98s 2025-07-05 23:07:15.065180 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.68s 2025-07-05 23:07:15.065198 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.93s 2025-07-05 23:07:15.065209 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.56s 2025-07-05 23:07:15.065220 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.52s 2025-07-05 23:07:15.065231 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.40s 2025-07-05 23:07:15.065242 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.01s 2025-07-05 23:07:15.065253 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.47s 2025-07-05 23:07:15.065264 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.07s 2025-07-05 23:07:15.065275 | orchestrator | Check MariaDB service --------------------------------------------------- 2.98s 2025-07-05 23:07:15.065292 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.91s 2025-07-05 23:07:15.065309 | orchestrator | 
service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.83s 2025-07-05 23:07:15.065326 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.60s 2025-07-05 23:07:15.065337 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.54s 2025-07-05 23:07:15.065346 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.31s 2025-07-05 23:07:15.065356 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.26s 2025-07-05 23:07:15.065366 | orchestrator | 2025-07-05 23:07:15 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:07:18.098481 | orchestrator | 2025-07-05 23:07:18 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:07:18.099681 | orchestrator | 2025-07-05 23:07:18 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:07:18.100976 | orchestrator | 2025-07-05 23:07:18 | INFO  | Task 9390c108-b77a-4e8e-a0cb-cf0655c5b145 is in state STARTED 2025-07-05 23:07:18.100998 | orchestrator | 2025-07-05 23:07:18 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:07:21.150000 | orchestrator | 2025-07-05 23:07:21 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:07:21.150225 | orchestrator | 2025-07-05 23:07:21 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:07:21.150241 | orchestrator | 2025-07-05 23:07:21 | INFO  | Task 9390c108-b77a-4e8e-a0cb-cf0655c5b145 is in state STARTED 2025-07-05 23:07:21.150253 | orchestrator | 2025-07-05 23:07:21 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:07:24.185833 | orchestrator | 2025-07-05 23:07:24 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:07:24.186393 | orchestrator | 2025-07-05 23:07:24 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state 
STARTED 2025-07-05 23:07:24.187345 | orchestrator | 2025-07-05 23:07:24 | INFO  | Task 9390c108-b77a-4e8e-a0cb-cf0655c5b145 is in state STARTED 2025-07-05 23:07:24.187574 | orchestrator | 2025-07-05 23:07:24 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:07:27.215409 | orchestrator | 2025-07-05 23:07:27 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:07:27.216213 | orchestrator | 2025-07-05 23:07:27 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:07:27.216850 | orchestrator | 2025-07-05 23:07:27 | INFO  | Task 9390c108-b77a-4e8e-a0cb-cf0655c5b145 is in state STARTED 2025-07-05 23:07:27.216915 | orchestrator | 2025-07-05 23:07:27 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:07:30.255608 | orchestrator | 2025-07-05 23:07:30 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:07:30.255709 | orchestrator | 2025-07-05 23:07:30 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:07:30.255725 | orchestrator | 2025-07-05 23:07:30 | INFO  | Task 9390c108-b77a-4e8e-a0cb-cf0655c5b145 is in state STARTED 2025-07-05 23:07:30.255790 | orchestrator | 2025-07-05 23:07:30 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:07:33.294093 | orchestrator | 2025-07-05 23:07:33 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:07:33.295924 | orchestrator | 2025-07-05 23:07:33 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:07:33.297136 | orchestrator | 2025-07-05 23:07:33 | INFO  | Task 9390c108-b77a-4e8e-a0cb-cf0655c5b145 is in state STARTED 2025-07-05 23:07:33.297186 | orchestrator | 2025-07-05 23:07:33 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:07:36.328110 | orchestrator | 2025-07-05 23:07:36 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:07:36.328230 | orchestrator | 
2025-07-05 23:07:36 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:07:36.329642 | orchestrator | 2025-07-05 23:07:36 | INFO  | Task 9390c108-b77a-4e8e-a0cb-cf0655c5b145 is in state STARTED 2025-07-05 23:07:36.329671 | orchestrator | 2025-07-05 23:07:36 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:08:22.061512 | orchestrator | 2025-07-05 23:08:22 |
INFO  | Task 9390c108-b77a-4e8e-a0cb-cf0655c5b145 is in state STARTED 2025-07-05 23:08:28.155597 | orchestrator | 2025-07-05 23:08:28 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:08:28.157267 | orchestrator | 2025-07-05 23:08:28 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:08:28.160934 | orchestrator | 2025-07-05 23:08:28 | INFO  | Task 9390c108-b77a-4e8e-a0cb-cf0655c5b145 is in state SUCCESS 2025-07-05 23:08:28.162829 | orchestrator | 2025-07-05 23:08:28.162868 | orchestrator | 2025-07-05 23:08:28.162881 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-07-05 23:08:28.162893 | orchestrator | 2025-07-05 23:08:28.162905 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-07-05 23:08:28.162924 | orchestrator | Saturday 05 July 2025 23:06:23 +0000 (0:00:00.533) 0:00:00.533 ********* 2025-07-05 23:08:28.162943 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:08:28.162962 | orchestrator | 2025-07-05 23:08:28.162981 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-07-05 23:08:28.162999 | orchestrator | Saturday 05 July 2025 23:06:24
+0000 (0:00:00.542) 0:00:01.075 ********* 2025-07-05 23:08:28.164132 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:08:28.164174 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:08:28.164192 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:08:28.164209 | orchestrator | 2025-07-05 23:08:28.164228 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-07-05 23:08:28.164247 | orchestrator | Saturday 05 July 2025 23:06:24 +0000 (0:00:00.542) 0:00:01.617 ********* 2025-07-05 23:08:28.164266 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:08:28.164282 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:08:28.164299 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:08:28.164316 | orchestrator | 2025-07-05 23:08:28.164334 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-07-05 23:08:28.164352 | orchestrator | Saturday 05 July 2025 23:06:25 +0000 (0:00:00.242) 0:00:01.859 ********* 2025-07-05 23:08:28.164390 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:08:28.164407 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:08:28.164423 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:08:28.164441 | orchestrator | 2025-07-05 23:08:28.164460 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-07-05 23:08:28.164478 | orchestrator | Saturday 05 July 2025 23:06:25 +0000 (0:00:00.634) 0:00:02.494 ********* 2025-07-05 23:08:28.164496 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:08:28.164515 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:08:28.164533 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:08:28.164552 | orchestrator | 2025-07-05 23:08:28.164570 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-07-05 23:08:28.164588 | orchestrator | Saturday 05 July 2025 23:06:26 +0000 (0:00:00.281) 0:00:02.776 ********* 2025-07-05 23:08:28.164607 | 
orchestrator | ok: [testbed-node-3] 2025-07-05 23:08:28.164624 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:08:28.164642 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:08:28.164661 | orchestrator | 2025-07-05 23:08:28.164680 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-07-05 23:08:28.164698 | orchestrator | Saturday 05 July 2025 23:06:26 +0000 (0:00:00.252) 0:00:03.029 ********* 2025-07-05 23:08:28.164717 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:08:28.164737 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:08:28.164802 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:08:28.164817 | orchestrator | 2025-07-05 23:08:28.164830 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-07-05 23:08:28.164844 | orchestrator | Saturday 05 July 2025 23:06:26 +0000 (0:00:00.276) 0:00:03.306 ********* 2025-07-05 23:08:28.164857 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.164870 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:08:28.164883 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.164895 | orchestrator | 2025-07-05 23:08:28.164907 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-07-05 23:08:28.164920 | orchestrator | Saturday 05 July 2025 23:06:26 +0000 (0:00:00.378) 0:00:03.684 ********* 2025-07-05 23:08:28.164933 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:08:28.164946 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:08:28.164958 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:08:28.164971 | orchestrator | 2025-07-05 23:08:28.164983 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-07-05 23:08:28.164996 | orchestrator | Saturday 05 July 2025 23:06:27 +0000 (0:00:00.256) 0:00:03.940 ********* 2025-07-05 23:08:28.165009 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-05 23:08:28.165022 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-05 23:08:28.165034 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-05 23:08:28.165048 | orchestrator | 2025-07-05 23:08:28.165061 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-07-05 23:08:28.165074 | orchestrator | Saturday 05 July 2025 23:06:27 +0000 (0:00:00.561) 0:00:04.502 ********* 2025-07-05 23:08:28.165103 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:08:28.165114 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:08:28.165125 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:08:28.165136 | orchestrator | 2025-07-05 23:08:28.165147 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-07-05 23:08:28.165158 | orchestrator | Saturday 05 July 2025 23:06:28 +0000 (0:00:00.382) 0:00:04.885 ********* 2025-07-05 23:08:28.165169 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-05 23:08:28.165180 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-05 23:08:28.165191 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-05 23:08:28.165201 | orchestrator | 2025-07-05 23:08:28.165212 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-07-05 23:08:28.165223 | orchestrator | Saturday 05 July 2025 23:06:30 +0000 (0:00:01.993) 0:00:06.878 ********* 2025-07-05 23:08:28.165234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-05 23:08:28.165246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-05 23:08:28.165257 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-2)  2025-07-05 23:08:28.165267 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.165278 | orchestrator | 2025-07-05 23:08:28.165289 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-07-05 23:08:28.165367 | orchestrator | Saturday 05 July 2025 23:06:30 +0000 (0:00:00.361) 0:00:07.239 ********* 2025-07-05 23:08:28.165384 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.165399 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.165411 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.165422 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.165434 | orchestrator | 2025-07-05 23:08:28.165445 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-07-05 23:08:28.165456 | orchestrator | Saturday 05 July 2025 23:06:31 +0000 (0:00:00.657) 0:00:07.896 ********* 2025-07-05 23:08:28.165478 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2025-07-05 23:08:28.165492 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.165504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.165524 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.165535 | orchestrator | 2025-07-05 23:08:28.165546 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-07-05 23:08:28.165557 | orchestrator | Saturday 05 July 2025 23:06:31 +0000 (0:00:00.136) 0:00:08.033 ********* 2025-07-05 23:08:28.165571 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4c614e36319f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-05 23:06:28.768635', 'end': '2025-07-05 23:06:28.813241', 'delta': '0:00:00.044606', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4c614e36319f'], 'stderr_lines': [], 'failed': 
False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-07-05 23:08:28.165586 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b4a33722bd06', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-05 23:06:29.444567', 'end': '2025-07-05 23:06:29.493184', 'delta': '0:00:00.048617', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4a33722bd06'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-07-05 23:08:28.165634 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd1323167fd5f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-05 23:06:29.971385', 'end': '2025-07-05 23:06:30.020858', 'delta': '0:00:00.049473', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1323167fd5f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-07-05 23:08:28.165648 | orchestrator | 2025-07-05 23:08:28.165660 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-07-05 23:08:28.165671 | orchestrator | Saturday 05 July 2025 23:06:31 +0000 (0:00:00.279) 0:00:08.313 ********* 2025-07-05 23:08:28.165682 | 
orchestrator | ok: [testbed-node-3] 2025-07-05 23:08:28.165693 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:08:28.165704 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:08:28.165716 | orchestrator | 2025-07-05 23:08:28.165727 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-07-05 23:08:28.165738 | orchestrator | Saturday 05 July 2025 23:06:31 +0000 (0:00:00.387) 0:00:08.700 ********* 2025-07-05 23:08:28.165805 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-07-05 23:08:28.165821 | orchestrator | 2025-07-05 23:08:28.165832 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-07-05 23:08:28.165843 | orchestrator | Saturday 05 July 2025 23:06:33 +0000 (0:00:01.673) 0:00:10.374 ********* 2025-07-05 23:08:28.165854 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.165865 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:08:28.165876 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.165887 | orchestrator | 2025-07-05 23:08:28.165898 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-07-05 23:08:28.165909 | orchestrator | Saturday 05 July 2025 23:06:33 +0000 (0:00:00.264) 0:00:10.638 ********* 2025-07-05 23:08:28.165929 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.165940 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:08:28.165951 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.165962 | orchestrator | 2025-07-05 23:08:28.165973 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-07-05 23:08:28.165984 | orchestrator | Saturday 05 July 2025 23:06:34 +0000 (0:00:00.370) 0:00:11.008 ********* 2025-07-05 23:08:28.165995 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.166006 | orchestrator | skipping: [testbed-node-4] 2025-07-05 
23:08:28.166070 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.166085 | orchestrator | 2025-07-05 23:08:28.166096 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-07-05 23:08:28.166108 | orchestrator | Saturday 05 July 2025 23:06:34 +0000 (0:00:00.393) 0:00:11.402 ********* 2025-07-05 23:08:28.166119 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:08:28.166130 | orchestrator | 2025-07-05 23:08:28.166141 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-07-05 23:08:28.166152 | orchestrator | Saturday 05 July 2025 23:06:34 +0000 (0:00:00.123) 0:00:11.525 ********* 2025-07-05 23:08:28.166163 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.166174 | orchestrator | 2025-07-05 23:08:28.166185 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-07-05 23:08:28.166201 | orchestrator | Saturday 05 July 2025 23:06:35 +0000 (0:00:00.208) 0:00:11.734 ********* 2025-07-05 23:08:28.166219 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.166236 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:08:28.166255 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.166274 | orchestrator | 2025-07-05 23:08:28.166294 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-07-05 23:08:28.166306 | orchestrator | Saturday 05 July 2025 23:06:35 +0000 (0:00:00.248) 0:00:11.983 ********* 2025-07-05 23:08:28.166317 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.166328 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:08:28.166339 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.166350 | orchestrator | 2025-07-05 23:08:28.166361 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-07-05 23:08:28.166372 | orchestrator | Saturday 05 July 
2025 23:06:35 +0000 (0:00:00.264) 0:00:12.247 ********* 2025-07-05 23:08:28.166383 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.166394 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:08:28.166405 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.166416 | orchestrator | 2025-07-05 23:08:28.166427 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-07-05 23:08:28.166438 | orchestrator | Saturday 05 July 2025 23:06:35 +0000 (0:00:00.392) 0:00:12.639 ********* 2025-07-05 23:08:28.166449 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.166459 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:08:28.166470 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.166481 | orchestrator | 2025-07-05 23:08:28.166492 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-07-05 23:08:28.166503 | orchestrator | Saturday 05 July 2025 23:06:36 +0000 (0:00:00.288) 0:00:12.928 ********* 2025-07-05 23:08:28.166514 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.166525 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:08:28.166536 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.166547 | orchestrator | 2025-07-05 23:08:28.166558 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-07-05 23:08:28.166569 | orchestrator | Saturday 05 July 2025 23:06:36 +0000 (0:00:00.269) 0:00:13.197 ********* 2025-07-05 23:08:28.166580 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.166591 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:08:28.166602 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.166613 | orchestrator | 2025-07-05 23:08:28.166634 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-07-05 23:08:28.166687 | orchestrator | Saturday 05 July 
2025 23:06:36 +0000 (0:00:00.279) 0:00:13.477 ********* 2025-07-05 23:08:28.166700 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.166711 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:08:28.166722 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.166733 | orchestrator | 2025-07-05 23:08:28.166744 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-07-05 23:08:28.166775 | orchestrator | Saturday 05 July 2025 23:06:37 +0000 (0:00:00.396) 0:00:13.873 ********* 2025-07-05 23:08:28.166788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8de564a6--401f--59e2--a445--234b3be175ce-osd--block--8de564a6--401f--59e2--a445--234b3be175ce', 'dm-uuid-LVM-yHtE4PzHBOsC3Ab6k4h6UunvVgRZUljOyF0P01Uq98ByQ0pAqL4fpNPtInc5P8ye'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.166807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2634d3d6--ac41--59e6--b3da--1ade7ee25156-osd--block--2634d3d6--ac41--59e6--b3da--1ade7ee25156', 'dm-uuid-LVM-HNqQQbtbb7sVYUTh3YcRap3mPYUyU3fkt8hNxUBSxMkyX8ntFOWb1kbhe9IQ0M1G'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.166819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.166832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.166844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.166855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.166867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.166920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.166934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.166945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.166962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b5adb4f--945c--5107--b1d3--f691d6050e0c-osd--block--9b5adb4f--945c--5107--b1d3--f691d6050e0c', 'dm-uuid-LVM-hdeKPafsMg7oZmQmUUtbjbXxCeVfK4Fc7Tozb4P6FyGwhttWgPO1w0OMYdhIdyr0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.166979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 
'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--24fdde66--e3ee--586c--8774--3b73abfeacc0-osd--block--24fdde66--e3ee--586c--8774--3b73abfeacc0', 'dm-uuid-LVM-6hql6u10Y3qbXznYPiK0N8Td6VOlEbXSl5ZAOlNelf3eImqtX1a6YOLzfgydvpBk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8de564a6--401f--59e2--a445--234b3be175ce-osd--block--8de564a6--401f--59e2--a445--234b3be175ce'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-l35b8k-4jZw-AVF9-dtCV-55lc-hBW0-OsqjEI', 'scsi-0QEMU_QEMU_HARDDISK_5326e027-1676-4a37-b778-dc441a5dd20f', 'scsi-SQEMU_QEMU_HARDDISK_5326e027-1676-4a37-b778-dc441a5dd20f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2634d3d6--ac41--59e6--b3da--1ade7ee25156-osd--block--2634d3d6--ac41--59e6--b3da--1ade7ee25156'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mYe279-QSeR-9Auk-vsEd-KS7m-Nuf3-zO0Nwb', 'scsi-0QEMU_QEMU_HARDDISK_ed4648fa-96a1-4881-93bd-124d41734f11', 'scsi-SQEMU_QEMU_HARDDISK_ed4648fa-96a1-4881-93bd-124d41734f11'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21be9c94-8d55-4d0c-8ee7-a63f66622af7', 'scsi-SQEMU_QEMU_HARDDISK_21be9c94-8d55-4d0c-8ee7-a63f66622af7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-07-05 23:08:28.167187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167198 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.167221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part1', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part14', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part15', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part16', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9b5adb4f--945c--5107--b1d3--f691d6050e0c-osd--block--9b5adb4f--945c--5107--b1d3--f691d6050e0c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z4HY1o-13oW-RsRu-9VSO-ZtBJ-IIbn-oPzaDr', 'scsi-0QEMU_QEMU_HARDDISK_19122c33-f71f-45f9-9cf9-313728601123', 'scsi-SQEMU_QEMU_HARDDISK_19122c33-f71f-45f9-9cf9-313728601123'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--469f88b0--11f8--5147--93f6--bf0afec867dc-osd--block--469f88b0--11f8--5147--93f6--bf0afec867dc', 'dm-uuid-LVM-1H10CvQOUznXT9n1BnnWrsxAkkyW6SNNrdQn3ewRMzqpIFWJl0UCRwDdc8tONcGz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--24fdde66--e3ee--586c--8774--3b73abfeacc0-osd--block--24fdde66--e3ee--586c--8774--3b73abfeacc0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pvVztE-UAM6-eWNY-4WU1-mWtD-ELcv-vmv72r', 'scsi-0QEMU_QEMU_HARDDISK_04acd911-9b95-486d-a663-ed49966b13bc', 'scsi-SQEMU_QEMU_HARDDISK_04acd911-9b95-486d-a663-ed49966b13bc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2969909f--2c17--514e--91b3--dec9da8cf58e-osd--block--2969909f--2c17--514e--91b3--dec9da8cf58e', 'dm-uuid-LVM-3YjiS71PI2OLsXeqnuBee7SEF5kpub6Hb61eTc20ueFBC0T8aPCTXo1JWz2FzmIx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8c0761f-22b5-43a1-bf1b-76278e72919b', 'scsi-SQEMU_QEMU_HARDDISK_b8c0761f-22b5-43a1-bf1b-76278e72919b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-07-05 23:08:28.167413 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:08:28.167429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-05 23:08:28.167519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part1', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part14', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part15', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part16', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--469f88b0--11f8--5147--93f6--bf0afec867dc-osd--block--469f88b0--11f8--5147--93f6--bf0afec867dc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MZ3H06-vyID-Ryiz-06Ik-f0Gf-PHsA-46Jjd9', 'scsi-0QEMU_QEMU_HARDDISK_8a7d49ca-9238-4676-a846-742ace525871', 'scsi-SQEMU_QEMU_HARDDISK_8a7d49ca-9238-4676-a846-742ace525871'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2969909f--2c17--514e--91b3--dec9da8cf58e-osd--block--2969909f--2c17--514e--91b3--dec9da8cf58e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ehPJtv-O6yk-3p3B-PYU1-aGTG-Up2O-ffGFtu', 'scsi-0QEMU_QEMU_HARDDISK_ba536110-d8e3-4c62-9758-5989affe708c', 'scsi-SQEMU_QEMU_HARDDISK_ba536110-d8e3-4c62-9758-5989affe708c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f21d976d-9ccd-433e-8515-86bf556b9e6c', 'scsi-SQEMU_QEMU_HARDDISK_f21d976d-9ccd-433e-8515-86bf556b9e6c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-05 23:08:28.167593 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.167604 | orchestrator | 2025-07-05 23:08:28.167615 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-07-05 23:08:28.167626 | orchestrator | Saturday 05 July 2025 23:06:37 +0000 (0:00:00.522) 0:00:14.396 ********* 2025-07-05 23:08:28.167643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8de564a6--401f--59e2--a445--234b3be175ce-osd--block--8de564a6--401f--59e2--a445--234b3be175ce', 'dm-uuid-LVM-yHtE4PzHBOsC3Ab6k4h6UunvVgRZUljOyF0P01Uq98ByQ0pAqL4fpNPtInc5P8ye'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167655 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2634d3d6--ac41--59e6--b3da--1ade7ee25156-osd--block--2634d3d6--ac41--59e6--b3da--1ade7ee25156', 'dm-uuid-LVM-HNqQQbtbb7sVYUTh3YcRap3mPYUyU3fkt8hNxUBSxMkyX8ntFOWb1kbhe9IQ0M1G'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167667 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167685 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167697 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167727 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167744 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167818 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167839 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167851 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b5adb4f--945c--5107--b1d3--f691d6050e0c-osd--block--9b5adb4f--945c--5107--b1d3--f691d6050e0c', 'dm-uuid-LVM-hdeKPafsMg7oZmQmUUtbjbXxCeVfK4Fc7Tozb4P6FyGwhttWgPO1w0OMYdhIdyr0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167878 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d0cd1c5-87e4-438c-b8ef-3a341283ec1c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-05 23:08:28.167892 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--24fdde66--e3ee--586c--8774--3b73abfeacc0-osd--block--24fdde66--e3ee--586c--8774--3b73abfeacc0', 'dm-uuid-LVM-6hql6u10Y3qbXznYPiK0N8Td6VOlEbXSl5ZAOlNelf3eImqtX1a6YOLzfgydvpBk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8de564a6--401f--59e2--a445--234b3be175ce-osd--block--8de564a6--401f--59e2--a445--234b3be175ce'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-l35b8k-4jZw-AVF9-dtCV-55lc-hBW0-OsqjEI', 'scsi-0QEMU_QEMU_HARDDISK_5326e027-1676-4a37-b778-dc441a5dd20f', 'scsi-SQEMU_QEMU_HARDDISK_5326e027-1676-4a37-b778-dc441a5dd20f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167923 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2634d3d6--ac41--59e6--b3da--1ade7ee25156-osd--block--2634d3d6--ac41--59e6--b3da--1ade7ee25156'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mYe279-QSeR-9Auk-vsEd-KS7m-Nuf3-zO0Nwb', 'scsi-0QEMU_QEMU_HARDDISK_ed4648fa-96a1-4881-93bd-124d41734f11', 'scsi-SQEMU_QEMU_HARDDISK_ed4648fa-96a1-4881-93bd-124d41734f11'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167957 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167968 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21be9c94-8d55-4d0c-8ee7-a63f66622af7', 'scsi-SQEMU_QEMU_HARDDISK_21be9c94-8d55-4d0c-8ee7-a63f66622af7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167987 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.167998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168015 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168025 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.168036 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168050 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168061 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168077 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168088 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--469f88b0--11f8--5147--93f6--bf0afec867dc-osd--block--469f88b0--11f8--5147--93f6--bf0afec867dc', 'dm-uuid-LVM-1H10CvQOUznXT9n1BnnWrsxAkkyW6SNNrdQn3ewRMzqpIFWJl0UCRwDdc8tONcGz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168105 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2969909f--2c17--514e--91b3--dec9da8cf58e-osd--block--2969909f--2c17--514e--91b3--dec9da8cf58e', 'dm-uuid-LVM-3YjiS71PI2OLsXeqnuBee7SEF5kpub6Hb61eTc20ueFBC0T8aPCTXo1JWz2FzmIx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168122 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part1', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part14', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part15', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part16', 'scsi-SQEMU_QEMU_HARDDISK_e374bb14-455a-4ad8-82c6-811b57be8189-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-05 23:08:28.168139 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168150 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9b5adb4f--945c--5107--b1d3--f691d6050e0c-osd--block--9b5adb4f--945c--5107--b1d3--f691d6050e0c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z4HY1o-13oW-RsRu-9VSO-ZtBJ-IIbn-oPzaDr', 'scsi-0QEMU_QEMU_HARDDISK_19122c33-f71f-45f9-9cf9-313728601123', 'scsi-SQEMU_QEMU_HARDDISK_19122c33-f71f-45f9-9cf9-313728601123'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168166 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168182 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--24fdde66--e3ee--586c--8774--3b73abfeacc0-osd--block--24fdde66--e3ee--586c--8774--3b73abfeacc0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pvVztE-UAM6-eWNY-4WU1-mWtD-ELcv-vmv72r', 'scsi-0QEMU_QEMU_HARDDISK_04acd911-9b95-486d-a663-ed49966b13bc', 'scsi-SQEMU_QEMU_HARDDISK_04acd911-9b95-486d-a663-ed49966b13bc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168198 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168209 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8c0761f-22b5-43a1-bf1b-76278e72919b', 'scsi-SQEMU_QEMU_HARDDISK_b8c0761f-22b5-43a1-bf1b-76278e72919b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168219 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168235 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168245 | orchestrator | skipping: 
[testbed-node-4] 2025-07-05 23:08:28.168256 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168270 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168287 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168298 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168316 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part1', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part14', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part15', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part16', 'scsi-SQEMU_QEMU_HARDDISK_c08a8ddb-ede5-4204-aaab-cb049d6c9122-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168332 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--469f88b0--11f8--5147--93f6--bf0afec867dc-osd--block--469f88b0--11f8--5147--93f6--bf0afec867dc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MZ3H06-vyID-Ryiz-06Ik-f0Gf-PHsA-46Jjd9', 'scsi-0QEMU_QEMU_HARDDISK_8a7d49ca-9238-4676-a846-742ace525871', 'scsi-SQEMU_QEMU_HARDDISK_8a7d49ca-9238-4676-a846-742ace525871'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168348 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2969909f--2c17--514e--91b3--dec9da8cf58e-osd--block--2969909f--2c17--514e--91b3--dec9da8cf58e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ehPJtv-O6yk-3p3B-PYU1-aGTG-Up2O-ffGFtu', 'scsi-0QEMU_QEMU_HARDDISK_ba536110-d8e3-4c62-9758-5989affe708c', 'scsi-SQEMU_QEMU_HARDDISK_ba536110-d8e3-4c62-9758-5989affe708c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168359 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f21d976d-9ccd-433e-8515-86bf556b9e6c', 'scsi-SQEMU_QEMU_HARDDISK_f21d976d-9ccd-433e-8515-86bf556b9e6c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168377 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-05-22-13-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-05 23:08:28.168387 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.168397 | orchestrator | 2025-07-05 23:08:28.168408 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-07-05 23:08:28.168418 | orchestrator | Saturday 05 July 2025 23:06:38 +0000 (0:00:00.531) 0:00:14.927 ********* 2025-07-05 23:08:28.168428 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:08:28.168438 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:08:28.168448 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:08:28.168458 | orchestrator | 2025-07-05 23:08:28.168467 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] ***************
2025-07-05 23:08:28.168483 | orchestrator | Saturday 05 July 2025 23:06:38 +0000 (0:00:00.648) 0:00:15.575 *********
2025-07-05 23:08:28.168493 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:08:28.168503 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:08:28.168512 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:08:28.168522 | orchestrator |
2025-07-05 23:08:28.168532 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-05 23:08:28.168542 | orchestrator | Saturday 05 July 2025 23:06:39 +0000 (0:00:00.379) 0:00:15.954 *********
2025-07-05 23:08:28.168552 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:08:28.168566 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:08:28.168576 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:08:28.168586 | orchestrator |
2025-07-05 23:08:28.168596 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-05 23:08:28.168606 | orchestrator | Saturday 05 July 2025 23:06:39 +0000 (0:00:00.630) 0:00:16.585 *********
2025-07-05 23:08:28.168616 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:08:28.168626 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:08:28.168636 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:08:28.168645 | orchestrator |
2025-07-05 23:08:28.168655 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-05 23:08:28.168665 | orchestrator | Saturday 05 July 2025 23:06:40 +0000 (0:00:00.262) 0:00:16.848 *********
2025-07-05 23:08:28.168675 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:08:28.168685 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:08:28.168695 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:08:28.168704 | orchestrator |
2025-07-05 23:08:28.168714 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact]
*********************** 2025-07-05 23:08:28.168724 | orchestrator | Saturday 05 July 2025 23:06:40 +0000 (0:00:00.364) 0:00:17.213 ********* 2025-07-05 23:08:28.168734 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.168744 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:08:28.168769 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:08:28.168779 | orchestrator | 2025-07-05 23:08:28.168789 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-07-05 23:08:28.168799 | orchestrator | Saturday 05 July 2025 23:06:40 +0000 (0:00:00.424) 0:00:17.638 ********* 2025-07-05 23:08:28.168809 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-07-05 23:08:28.168819 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-07-05 23:08:28.168829 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-07-05 23:08:28.168839 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-07-05 23:08:28.168848 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-07-05 23:08:28.168858 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-07-05 23:08:28.168868 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-07-05 23:08:28.168878 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-07-05 23:08:28.168887 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-07-05 23:08:28.168897 | orchestrator | 2025-07-05 23:08:28.168907 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-07-05 23:08:28.168917 | orchestrator | Saturday 05 July 2025 23:06:41 +0000 (0:00:00.785) 0:00:18.423 ********* 2025-07-05 23:08:28.168927 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-05 23:08:28.168937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-05 23:08:28.168947 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)
2025-07-05 23:08:28.168956 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:08:28.168966 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-05 23:08:28.168976 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-05 23:08:28.168986 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-05 23:08:28.168995 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:08:28.169005 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-07-05 23:08:28.169021 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-07-05 23:08:28.169031 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-07-05 23:08:28.169040 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:08:28.169050 | orchestrator |
2025-07-05 23:08:28.169060 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-07-05 23:08:28.169070 | orchestrator | Saturday 05 July 2025 23:06:42 +0000 (0:00:00.308) 0:00:18.731 *********
2025-07-05 23:08:28.169080 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:08:28.169090 | orchestrator |
2025-07-05 23:08:28.169100 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-07-05 23:08:28.169110 | orchestrator | Saturday 05 July 2025 23:06:42 +0000 (0:00:00.581) 0:00:19.313 *********
2025-07-05 23:08:28.169120 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:08:28.169130 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:08:28.169140 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:08:28.169150 | orchestrator |
2025-07-05 23:08:28.169164 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-07-05 23:08:28.169175 | orchestrator | Saturday 05 July 2025 23:06:42 +0000 (0:00:00.284) 0:00:19.597 *********
2025-07-05 23:08:28.169184 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:08:28.169194 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:08:28.169204 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:08:28.169214 | orchestrator |
2025-07-05 23:08:28.169224 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-07-05 23:08:28.169234 | orchestrator | Saturday 05 July 2025 23:06:43 +0000 (0:00:00.276) 0:00:19.874 *********
2025-07-05 23:08:28.169243 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:08:28.169253 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:08:28.169263 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:08:28.169273 | orchestrator |
2025-07-05 23:08:28.169282 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-07-05 23:08:28.169292 | orchestrator | Saturday 05 July 2025 23:06:43 +0000 (0:00:00.268) 0:00:20.143 *********
2025-07-05 23:08:28.169302 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:08:28.169312 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:08:28.169321 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:08:28.169331 | orchestrator |
2025-07-05 23:08:28.169341 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-07-05 23:08:28.169351 | orchestrator | Saturday 05 July 2025 23:06:43 +0000 (0:00:00.478) 0:00:20.621 *********
2025-07-05 23:08:28.169361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-05 23:08:28.169375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-05 23:08:28.169385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-05 23:08:28.169394 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:08:28.169404 | orchestrator |
2025-07-05 23:08:28.169414 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-07-05 23:08:28.169424 | orchestrator | Saturday 05 July 2025 23:06:44 +0000 (0:00:00.331) 0:00:20.952 *********
2025-07-05 23:08:28.169434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-05 23:08:28.169443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-05 23:08:28.169453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-05 23:08:28.169463 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:08:28.169473 | orchestrator |
2025-07-05 23:08:28.169483 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-07-05 23:08:28.169493 | orchestrator | Saturday 05 July 2025 23:06:44 +0000 (0:00:00.333) 0:00:21.286 *********
2025-07-05 23:08:28.169502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-05 23:08:28.169521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-05 23:08:28.169530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-05 23:08:28.169540 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:08:28.169550 | orchestrator |
2025-07-05 23:08:28.169560 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-07-05 23:08:28.169570 | orchestrator | Saturday 05 July 2025 23:06:44 +0000 (0:00:00.332) 0:00:21.618 *********
2025-07-05 23:08:28.169580 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:08:28.169589 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:08:28.169599 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:08:28.169609 | orchestrator |
2025-07-05 23:08:28.169619 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-07-05 23:08:28.169629 | orchestrator | Saturday 05 July 2025 23:06:45 +0000
(0:00:00.296) 0:00:21.915 ********* 2025-07-05 23:08:28.169639 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-05 23:08:28.169648 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-05 23:08:28.169658 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-05 23:08:28.169668 | orchestrator | 2025-07-05 23:08:28.169678 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-07-05 23:08:28.169688 | orchestrator | Saturday 05 July 2025 23:06:45 +0000 (0:00:00.439) 0:00:22.354 ********* 2025-07-05 23:08:28.169697 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-05 23:08:28.169707 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-05 23:08:28.169717 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-05 23:08:28.169727 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-05 23:08:28.169737 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-05 23:08:28.169746 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-05 23:08:28.169773 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-05 23:08:28.169783 | orchestrator | 2025-07-05 23:08:28.169793 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-07-05 23:08:28.169802 | orchestrator | Saturday 05 July 2025 23:06:46 +0000 (0:00:00.799) 0:00:23.154 ********* 2025-07-05 23:08:28.169812 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-05 23:08:28.169822 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-05 23:08:28.169832 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-05 23:08:28.169842 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-05 23:08:28.169852 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-05 23:08:28.169861 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-05 23:08:28.169871 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-05 23:08:28.169881 | orchestrator | 2025-07-05 23:08:28.169895 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-07-05 23:08:28.169905 | orchestrator | Saturday 05 July 2025 23:06:48 +0000 (0:00:01.604) 0:00:24.758 ********* 2025-07-05 23:08:28.169915 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:08:28.169925 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:08:28.169935 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-07-05 23:08:28.169945 | orchestrator | 2025-07-05 23:08:28.169954 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-07-05 23:08:28.169964 | orchestrator | Saturday 05 July 2025 23:06:48 +0000 (0:00:00.317) 0:00:25.075 ********* 2025-07-05 23:08:28.169975 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-05 23:08:28.169991 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2025-07-05 23:08:28.170006 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-05 23:08:28.170041 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-05 23:08:28.170054 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-05 23:08:28.170064 | orchestrator | 2025-07-05 23:08:28.170074 | orchestrator | TASK [generate keys] *********************************************************** 2025-07-05 23:08:28.170084 | orchestrator | Saturday 05 July 2025 23:07:32 +0000 (0:00:43.885) 0:01:08.961 ********* 2025-07-05 23:08:28.170093 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:08:28.170103 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:08:28.170113 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:08:28.170123 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:08:28.170132 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:08:28.170142 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 
23:08:28.170152 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-07-05 23:08:28.170161 | orchestrator | 2025-07-05 23:08:28.170171 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-07-05 23:08:28.170181 | orchestrator | Saturday 05 July 2025 23:07:56 +0000 (0:00:23.917) 0:01:32.878 ********* 2025-07-05 23:08:28.170191 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:08:28.170200 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:08:28.170210 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:08:28.170220 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:08:28.170230 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:08:28.170239 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:08:28.170249 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-05 23:08:28.170259 | orchestrator | 2025-07-05 23:08:28.170268 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-07-05 23:08:28.170278 | orchestrator | Saturday 05 July 2025 23:08:08 +0000 (0:00:12.161) 0:01:45.040 ********* 2025-07-05 23:08:28.170288 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:08:28.170298 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-05 23:08:28.170308 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-05 23:08:28.170324 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-05 23:08:28.170334 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None)
2025-07-05 23:08:28.170344 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-05 23:08:28.170358 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-05 23:08:28.170368 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-05 23:08:28.170378 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-05 23:08:28.170388 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-05 23:08:28.170398 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-05 23:08:28.170407 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-05 23:08:28.170417 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-05 23:08:28.170427 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-05 23:08:28.170436 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-05 23:08:28.170446 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-05 23:08:28.170456 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-05 23:08:28.170465 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-05 23:08:28.170475 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-07-05 23:08:28.170485 | orchestrator | 
2025-07-05 23:08:28.170499 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 23:08:28.170510 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-07-05 23:08:28.170520 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-07-05 23:08:28.170530 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-07-05 23:08:28.170540 | orchestrator | 
2025-07-05 23:08:28.170550 | orchestrator | 
2025-07-05 23:08:28.170560 | orchestrator | 
2025-07-05 23:08:28.170570 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 23:08:28.170579 | orchestrator | Saturday 05 July 2025 23:08:25 +0000 (0:00:17.527) 0:02:02.567 *********
2025-07-05 23:08:28.170589 | orchestrator | ===============================================================================
2025-07-05 23:08:28.170599 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.89s
2025-07-05 23:08:28.170608 | orchestrator | generate keys ---------------------------------------------------------- 23.92s
2025-07-05 23:08:28.170618 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.53s
2025-07-05 23:08:28.170628 | orchestrator | get keys from monitors ------------------------------------------------- 12.16s
2025-07-05 23:08:28.170637 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.99s
2025-07-05 23:08:28.170647 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.67s
2025-07-05 23:08:28.170656 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.60s
2025-07-05 23:08:28.170666 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.80s
2025-07-05 23:08:28.170676 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.79s
2025-07-05 23:08:28.170685 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.66s
2025-07-05 23:08:28.170695 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.65s
2025-07-05 23:08:28.170710 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.63s
2025-07-05 23:08:28.170720 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s
2025-07-05 23:08:28.170730 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.58s
2025-07-05 23:08:28.170740 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.56s
2025-07-05 23:08:28.170749 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.54s
2025-07-05 23:08:28.170778 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.54s
2025-07-05 23:08:28.170788 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.53s
2025-07-05 23:08:28.170798 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.52s
2025-07-05 23:08:28.170807 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.48s
2025-07-05 23:08:28.170817 | orchestrator | 2025-07-05 23:08:28 | INFO  | Task 8c935824-fc39-47eb-8a65-ae19c916d7da is in state STARTED
2025-07-05 23:08:28.170827 | orchestrator | 2025-07-05 23:08:28 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:08:31.202575 | orchestrator | 2025-07-05 23:08:31 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED
2025-07-05 23:08:31.204998 | orchestrator | 2025-07-05 23:08:31 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED
2025-07-05 23:08:31.206643 | orchestrator | 2025-07-05 23:08:31 | INFO  | Task 8c935824-fc39-47eb-8a65-ae19c916d7da is in state STARTED
2025-07-05 23:08:31.206680 | orchestrator | 2025-07-05 23:08:31 | INFO  | Wait 1 second(s) until the next
check 2025-07-05 23:08:34.249447 | orchestrator | 2025-07-05 23:08:34 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:08:34.251514 | orchestrator | 2025-07-05 23:08:34 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:08:34.251627 | orchestrator | 2025-07-05 23:08:34 | INFO  | Task 8c935824-fc39-47eb-8a65-ae19c916d7da is in state STARTED 2025-07-05 23:08:34.252159 | orchestrator | 2025-07-05 23:08:34 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:08:37.303919 | orchestrator | 2025-07-05 23:08:37 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:08:37.304506 | orchestrator | 2025-07-05 23:08:37 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:08:37.305672 | orchestrator | 2025-07-05 23:08:37 | INFO  | Task 8c935824-fc39-47eb-8a65-ae19c916d7da is in state STARTED 2025-07-05 23:08:37.305688 | orchestrator | 2025-07-05 23:08:37 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:08:40.357057 | orchestrator | 2025-07-05 23:08:40 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:08:40.359200 | orchestrator | 2025-07-05 23:08:40 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:08:40.362240 | orchestrator | 2025-07-05 23:08:40 | INFO  | Task 8c935824-fc39-47eb-8a65-ae19c916d7da is in state STARTED 2025-07-05 23:08:40.362358 | orchestrator | 2025-07-05 23:08:40 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:08:43.414220 | orchestrator | 2025-07-05 23:08:43 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:08:43.415948 | orchestrator | 2025-07-05 23:08:43 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:08:43.418101 | orchestrator | 2025-07-05 23:08:43 | INFO  | Task 8c935824-fc39-47eb-8a65-ae19c916d7da is in state STARTED 2025-07-05 
23:08:43.418414 | orchestrator | 2025-07-05 23:08:43 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:08:46.468328 | orchestrator | 2025-07-05 23:08:46 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:08:46.470984 | orchestrator | 2025-07-05 23:08:46 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:08:46.472246 | orchestrator | 2025-07-05 23:08:46 | INFO  | Task 8c935824-fc39-47eb-8a65-ae19c916d7da is in state STARTED 2025-07-05 23:08:46.472274 | orchestrator | 2025-07-05 23:08:46 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:08:49.518876 | orchestrator | 2025-07-05 23:08:49 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:08:49.520902 | orchestrator | 2025-07-05 23:08:49 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:08:49.522061 | orchestrator | 2025-07-05 23:08:49 | INFO  | Task 8c935824-fc39-47eb-8a65-ae19c916d7da is in state STARTED 2025-07-05 23:08:49.522194 | orchestrator | 2025-07-05 23:08:49 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:08:52.564793 | orchestrator | 2025-07-05 23:08:52 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:08:52.566579 | orchestrator | 2025-07-05 23:08:52 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:08:52.567835 | orchestrator | 2025-07-05 23:08:52 | INFO  | Task 8c935824-fc39-47eb-8a65-ae19c916d7da is in state STARTED 2025-07-05 23:08:52.567863 | orchestrator | 2025-07-05 23:08:52 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:08:55.603975 | orchestrator | 2025-07-05 23:08:55 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:08:55.606157 | orchestrator | 2025-07-05 23:08:55 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state STARTED 2025-07-05 23:08:55.607966 | orchestrator | 2025-07-05 23:08:55 | 
INFO  | Task b938d70f-ea00-4144-805a-df4b48967200 is in state STARTED
2025-07-05 23:08:55.609574 | orchestrator | 2025-07-05 23:08:55 | INFO  | Task 8c935824-fc39-47eb-8a65-ae19c916d7da is in state SUCCESS
2025-07-05 23:08:55.609642 | orchestrator | 2025-07-05 23:08:55 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:08:58.657489 | orchestrator | 2025-07-05 23:08:58 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED
2025-07-05 23:08:58.662581 | orchestrator | 
2025-07-05 23:08:58.662630 | orchestrator | 
2025-07-05 23:08:58.662644 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-07-05 23:08:58.662673 | orchestrator | 
2025-07-05 23:08:58.662685 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-07-05 23:08:58.662698 | orchestrator | Saturday 05 July 2025 23:08:29 +0000 (0:00:00.155) 0:00:00.155 *********
2025-07-05 23:08:58.662709 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-07-05 23:08:58.662722 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-07-05 23:08:58.662734 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-07-05 23:08:58.662745 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-07-05 23:08:58.662756 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-07-05 23:08:58.662789 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-07-05 23:08:58.662801 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-07-05 23:08:58.662812 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-07-05 23:08:58.662848 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-07-05 23:08:58.662860 | orchestrator | 
2025-07-05 23:08:58.662885 | orchestrator | TASK [Create share directory] **************************************************
2025-07-05 23:08:58.662897 | orchestrator | Saturday 05 July 2025 23:08:34 +0000 (0:00:04.167) 0:00:04.322 *********
2025-07-05 23:08:58.662910 | orchestrator | changed: [testbed-manager -> localhost]
2025-07-05 23:08:58.662922 | orchestrator | 
2025-07-05 23:08:58.662934 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-07-05 23:08:58.662945 | orchestrator | Saturday 05 July 2025 23:08:35 +0000 (0:00:00.992) 0:00:05.315 *********
2025-07-05 23:08:58.662957 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-07-05 23:08:58.662969 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-07-05 23:08:58.662981 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-07-05 23:08:58.662993 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-07-05 23:08:58.663004 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-07-05 23:08:58.663016 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-07-05 23:08:58.663027 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-07-05 23:08:58.663039 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-07-05 23:08:58.663051 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-07-05 23:08:58.663062 | orchestrator | 
2025-07-05
23:08:58.663074 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-07-05 23:08:58.663085 | orchestrator | Saturday 05 July 2025 23:08:47 +0000 (0:00:12.871) 0:00:18.187 *********
2025-07-05 23:08:58.663098 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-07-05 23:08:58.663109 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-07-05 23:08:58.663122 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-07-05 23:08:58.663134 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-07-05 23:08:58.663145 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-07-05 23:08:58.663157 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-07-05 23:08:58.663169 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-07-05 23:08:58.663182 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-07-05 23:08:58.663195 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-07-05 23:08:58.663256 | orchestrator | 
2025-07-05 23:08:58.663365 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 23:08:58.663382 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 23:08:58.663395 | orchestrator | 
2025-07-05 23:08:58.663406 | orchestrator | 
2025-07-05 23:08:58.663417 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 23:08:58.663429 | orchestrator | Saturday 05 July 2025 23:08:54 +0000 (0:00:06.384) 0:00:24.571 *********
2025-07-05 23:08:58.663440 | orchestrator | ===============================================================================
2025-07-05 23:08:58.663450 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.87s
2025-07-05 23:08:58.663462 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.38s
2025-07-05 23:08:58.663472 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.17s
2025-07-05 23:08:58.663483 | orchestrator | Create share directory -------------------------------------------------- 0.99s
2025-07-05 23:08:58.663504 | orchestrator | 
2025-07-05 23:08:58.663515 | orchestrator | 
2025-07-05 23:08:58.663527 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-05 23:08:58.663672 | orchestrator | 
2025-07-05 23:08:58.663701 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-05 23:08:58.663713 | orchestrator | Saturday 05 July 2025 23:07:16 +0000 (0:00:00.232) 0:00:00.232 *********
2025-07-05 23:08:58.663724 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:08:58.663736 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:08:58.663747 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:08:58.663758 | orchestrator | 
2025-07-05 23:08:58.663792 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-05 23:08:58.663803 | orchestrator | Saturday 05 July 2025 23:07:16 +0000 (0:00:00.266) 0:00:00.498 *********
2025-07-05 23:08:58.663814 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-07-05 23:08:58.663826 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-07-05 23:08:58.663837 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-07-05 23:08:58.663848 | orchestrator | 
2025-07-05 23:08:58.663859 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-07-05 23:08:58.663871 | orchestrator | 
2025-07-05 23:08:58.663882 | orchestrator | TASK
[horizon : include_tasks] ************************************************* 2025-07-05 23:08:58.663893 | orchestrator | Saturday 05 July 2025 23:07:16 +0000 (0:00:00.341) 0:00:00.840 ********* 2025-07-05 23:08:58.663904 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:08:58.663915 | orchestrator | 2025-07-05 23:08:58.663927 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-07-05 23:08:58.663938 | orchestrator | Saturday 05 July 2025 23:07:17 +0000 (0:00:00.440) 0:00:01.280 ********* 2025-07-05 23:08:58.663963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-05 23:08:58.664010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-05 23:08:58.664026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-05 23:08:58.664046 | orchestrator | 2025-07-05 23:08:58.664057 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-07-05 23:08:58.664069 | orchestrator | Saturday 05 July 2025 23:07:18 +0000 (0:00:01.053) 0:00:02.333 ********* 2025-07-05 23:08:58.664080 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:08:58.664092 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:08:58.664118 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:08:58.664130 | orchestrator | 2025-07-05 23:08:58.664141 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-05 23:08:58.664152 | orchestrator | Saturday 05 July 2025 23:07:18 +0000 (0:00:00.382) 0:00:02.716 ********* 2025-07-05 
23:08:58.664163 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-05 23:08:58.664174 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-05 23:08:58.664191 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-07-05 23:08:58.664203 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-07-05 23:08:58.664214 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-07-05 23:08:58.664225 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-07-05 23:08:58.664236 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-07-05 23:08:58.664246 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-07-05 23:08:58.664257 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-05 23:08:58.664268 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-05 23:08:58.664279 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-07-05 23:08:58.664290 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-07-05 23:08:58.664301 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-07-05 23:08:58.664314 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-07-05 23:08:58.664326 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-07-05 23:08:58.664339 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-07-05 23:08:58.664356 | orchestrator | skipping: [testbed-node-2] => (item={'name': 
'cloudkitty', 'enabled': False})  2025-07-05 23:08:58.664368 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-05 23:08:58.664381 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-07-05 23:08:58.664393 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-07-05 23:08:58.664406 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-07-05 23:08:58.664418 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-07-05 23:08:58.664431 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-07-05 23:08:58.664443 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-07-05 23:08:58.664456 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-07-05 23:08:58.664471 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-07-05 23:08:58.664491 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-07-05 23:08:58.664504 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-07-05 23:08:58.664518 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-07-05 23:08:58.664530 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-07-05 23:08:58.664543 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-07-05 23:08:58.664556 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-07-05 23:08:58.664568 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-07-05 23:08:58.664581 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-07-05 23:08:58.664593 | orchestrator | 2025-07-05 23:08:58.664606 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-05 23:08:58.664618 | orchestrator | Saturday 05 July 2025 23:07:19 +0000 (0:00:00.702) 0:00:03.419 ********* 2025-07-05 23:08:58.664631 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:08:58.664644 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:08:58.664656 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:08:58.664668 | orchestrator | 2025-07-05 23:08:58.664679 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-05 23:08:58.664690 | orchestrator | Saturday 05 July 2025 23:07:19 +0000 (0:00:00.271) 0:00:03.690 ********* 2025-07-05 23:08:58.664701 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.664712 | orchestrator | 2025-07-05 23:08:58.664723 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-05 23:08:58.664740 | orchestrator | Saturday 05 July 2025 23:07:19 +0000 (0:00:00.123) 0:00:03.813 ********* 2025-07-05 23:08:58.664751 | 
orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.664763 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.664834 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.664845 | orchestrator | 2025-07-05 23:08:58.664856 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-05 23:08:58.664867 | orchestrator | Saturday 05 July 2025 23:07:20 +0000 (0:00:00.399) 0:00:04.213 ********* 2025-07-05 23:08:58.664878 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:08:58.664889 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:08:58.664901 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:08:58.664912 | orchestrator | 2025-07-05 23:08:58.664922 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-05 23:08:58.664947 | orchestrator | Saturday 05 July 2025 23:07:20 +0000 (0:00:00.265) 0:00:04.478 ********* 2025-07-05 23:08:58.664969 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.664980 | orchestrator | 2025-07-05 23:08:58.664991 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-05 23:08:58.665002 | orchestrator | Saturday 05 July 2025 23:07:20 +0000 (0:00:00.115) 0:00:04.594 ********* 2025-07-05 23:08:58.665013 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.665024 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.665035 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.665046 | orchestrator | 2025-07-05 23:08:58.665057 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-05 23:08:58.665076 | orchestrator | Saturday 05 July 2025 23:07:20 +0000 (0:00:00.247) 0:00:04.841 ********* 2025-07-05 23:08:58.665085 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:08:58.665095 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:08:58.665105 | orchestrator | ok: 
[testbed-node-2] 2025-07-05 23:08:58.665115 | orchestrator | 2025-07-05 23:08:58.665124 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-05 23:08:58.665135 | orchestrator | Saturday 05 July 2025 23:07:21 +0000 (0:00:00.289) 0:00:05.131 ********* 2025-07-05 23:08:58.665144 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.665154 | orchestrator | 2025-07-05 23:08:58.665164 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-05 23:08:58.665174 | orchestrator | Saturday 05 July 2025 23:07:21 +0000 (0:00:00.240) 0:00:05.371 ********* 2025-07-05 23:08:58.665183 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.665193 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.665203 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.665213 | orchestrator | 2025-07-05 23:08:58.665223 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-05 23:08:58.665232 | orchestrator | Saturday 05 July 2025 23:07:21 +0000 (0:00:00.272) 0:00:05.644 ********* 2025-07-05 23:08:58.665242 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:08:58.665252 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:08:58.665261 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:08:58.665271 | orchestrator | 2025-07-05 23:08:58.665281 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-05 23:08:58.665291 | orchestrator | Saturday 05 July 2025 23:07:22 +0000 (0:00:00.287) 0:00:05.932 ********* 2025-07-05 23:08:58.665301 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.665310 | orchestrator | 2025-07-05 23:08:58.665320 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-05 23:08:58.665330 | orchestrator | Saturday 05 July 2025 23:07:22 +0000 (0:00:00.115) 0:00:06.047 ********* 
2025-07-05 23:08:58.665339 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.665349 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.665359 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.665369 | orchestrator | 2025-07-05 23:08:58.665379 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-05 23:08:58.665389 | orchestrator | Saturday 05 July 2025 23:07:22 +0000 (0:00:00.263) 0:00:06.311 ********* 2025-07-05 23:08:58.665398 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:08:58.665408 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:08:58.665418 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:08:58.665427 | orchestrator | 2025-07-05 23:08:58.665437 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-05 23:08:58.665447 | orchestrator | Saturday 05 July 2025 23:07:22 +0000 (0:00:00.401) 0:00:06.712 ********* 2025-07-05 23:08:58.665456 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.665466 | orchestrator | 2025-07-05 23:08:58.665476 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-05 23:08:58.665485 | orchestrator | Saturday 05 July 2025 23:07:22 +0000 (0:00:00.130) 0:00:06.843 ********* 2025-07-05 23:08:58.665495 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.665505 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.665515 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.665525 | orchestrator | 2025-07-05 23:08:58.665534 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-05 23:08:58.665544 | orchestrator | Saturday 05 July 2025 23:07:23 +0000 (0:00:00.271) 0:00:07.114 ********* 2025-07-05 23:08:58.665633 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:08:58.665653 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:08:58.665663 | 
orchestrator | ok: [testbed-node-2] 2025-07-05 23:08:58.665673 | orchestrator | 2025-07-05 23:08:58.665683 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-05 23:08:58.665699 | orchestrator | Saturday 05 July 2025 23:07:23 +0000 (0:00:00.285) 0:00:07.400 ********* 2025-07-05 23:08:58.665709 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.665719 | orchestrator | 2025-07-05 23:08:58.665729 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-05 23:08:58.665739 | orchestrator | Saturday 05 July 2025 23:07:23 +0000 (0:00:00.114) 0:00:07.515 ********* 2025-07-05 23:08:58.665748 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.665758 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.665783 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.665794 | orchestrator | 2025-07-05 23:08:58.665803 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-05 23:08:58.665813 | orchestrator | Saturday 05 July 2025 23:07:24 +0000 (0:00:00.402) 0:00:07.917 ********* 2025-07-05 23:08:58.665823 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:08:58.665833 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:08:58.665842 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:08:58.665852 | orchestrator | 2025-07-05 23:08:58.665870 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-05 23:08:58.665880 | orchestrator | Saturday 05 July 2025 23:07:24 +0000 (0:00:00.278) 0:00:08.196 ********* 2025-07-05 23:08:58.665890 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.665900 | orchestrator | 2025-07-05 23:08:58.665910 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-05 23:08:58.665920 | orchestrator | Saturday 05 July 2025 23:07:24 +0000 (0:00:00.130) 
0:00:08.327 ********* 2025-07-05 23:08:58.665929 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.665939 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.665949 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.665959 | orchestrator | 2025-07-05 23:08:58.665969 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-05 23:08:58.665979 | orchestrator | Saturday 05 July 2025 23:07:24 +0000 (0:00:00.262) 0:00:08.589 ********* 2025-07-05 23:08:58.665988 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:08:58.665998 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:08:58.666008 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:08:58.666063 | orchestrator | 2025-07-05 23:08:58.666073 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-05 23:08:58.666084 | orchestrator | Saturday 05 July 2025 23:07:24 +0000 (0:00:00.262) 0:00:08.852 ********* 2025-07-05 23:08:58.666093 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.666103 | orchestrator | 2025-07-05 23:08:58.666113 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-05 23:08:58.666123 | orchestrator | Saturday 05 July 2025 23:07:25 +0000 (0:00:00.102) 0:00:08.955 ********* 2025-07-05 23:08:58.666133 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.666143 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.666153 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.666163 | orchestrator | 2025-07-05 23:08:58.666178 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-05 23:08:58.666188 | orchestrator | Saturday 05 July 2025 23:07:25 +0000 (0:00:00.450) 0:00:09.405 ********* 2025-07-05 23:08:58.666198 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:08:58.666208 | orchestrator | ok: [testbed-node-1] 2025-07-05 
23:08:58.666218 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:08:58.666228 | orchestrator | 2025-07-05 23:08:58.666238 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-05 23:08:58.666247 | orchestrator | Saturday 05 July 2025 23:07:25 +0000 (0:00:00.274) 0:00:09.680 ********* 2025-07-05 23:08:58.666257 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.666267 | orchestrator | 2025-07-05 23:08:58.666277 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-05 23:08:58.666287 | orchestrator | Saturday 05 July 2025 23:07:25 +0000 (0:00:00.097) 0:00:09.778 ********* 2025-07-05 23:08:58.666297 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.666313 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.666323 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.666333 | orchestrator | 2025-07-05 23:08:58.666343 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-05 23:08:58.666353 | orchestrator | Saturday 05 July 2025 23:07:26 +0000 (0:00:00.323) 0:00:10.102 ********* 2025-07-05 23:08:58.666363 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:08:58.666372 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:08:58.666382 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:08:58.666392 | orchestrator | 2025-07-05 23:08:58.666402 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-05 23:08:58.666412 | orchestrator | Saturday 05 July 2025 23:07:26 +0000 (0:00:00.456) 0:00:10.558 ********* 2025-07-05 23:08:58.666422 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.666432 | orchestrator | 2025-07-05 23:08:58.666442 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-05 23:08:58.666452 | orchestrator | Saturday 05 July 2025 23:07:26 +0000 
(0:00:00.115) 0:00:10.674 ********* 2025-07-05 23:08:58.666462 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.666471 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.666481 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.666491 | orchestrator | 2025-07-05 23:08:58.666501 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-07-05 23:08:58.666511 | orchestrator | Saturday 05 July 2025 23:07:27 +0000 (0:00:00.271) 0:00:10.946 ********* 2025-07-05 23:08:58.666521 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:08:58.666530 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:08:58.666540 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:08:58.666550 | orchestrator | 2025-07-05 23:08:58.666560 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-07-05 23:08:58.666570 | orchestrator | Saturday 05 July 2025 23:07:28 +0000 (0:00:01.592) 0:00:12.539 ********* 2025-07-05 23:08:58.666580 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-05 23:08:58.666590 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-05 23:08:58.666600 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-05 23:08:58.666609 | orchestrator | 2025-07-05 23:08:58.666619 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-07-05 23:08:58.666629 | orchestrator | Saturday 05 July 2025 23:07:30 +0000 (0:00:01.570) 0:00:14.109 ********* 2025-07-05 23:08:58.666639 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-05 23:08:58.666649 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-05 
23:08:58.666659 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-05 23:08:58.666669 | orchestrator | 2025-07-05 23:08:58.666679 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-07-05 23:08:58.666689 | orchestrator | Saturday 05 July 2025 23:07:32 +0000 (0:00:02.135) 0:00:16.244 ********* 2025-07-05 23:08:58.666705 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-05 23:08:58.666716 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-05 23:08:58.666726 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-05 23:08:58.666735 | orchestrator | 2025-07-05 23:08:58.666745 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-07-05 23:08:58.666755 | orchestrator | Saturday 05 July 2025 23:07:33 +0000 (0:00:01.593) 0:00:17.838 ********* 2025-07-05 23:08:58.666783 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.666793 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.666810 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.666820 | orchestrator | 2025-07-05 23:08:58.666829 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-07-05 23:08:58.666839 | orchestrator | Saturday 05 July 2025 23:07:34 +0000 (0:00:00.253) 0:00:18.092 ********* 2025-07-05 23:08:58.666849 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.666859 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.666869 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.666878 | orchestrator | 2025-07-05 23:08:58.666888 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2025-07-05 23:08:58.666898 | orchestrator | Saturday 05 July 2025 23:07:34 +0000 (0:00:00.241) 0:00:18.333 ********* 2025-07-05 23:08:58.666909 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:08:58.666926 | orchestrator | 2025-07-05 23:08:58.666941 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-07-05 23:08:58.666965 | orchestrator | Saturday 05 July 2025 23:07:35 +0000 (0:00:00.658) 0:00:18.991 ********* 2025-07-05 23:08:58.666986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-05 23:08:58.667028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-05 23:08:58.667066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-05 23:08:58.667085 | orchestrator | 2025-07-05 23:08:58.667096 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-07-05 23:08:58.667106 | orchestrator | Saturday 05 July 2025 23:07:36 +0000 (0:00:01.488) 0:00:20.480 ********* 2025-07-05 23:08:58 | INFO  | Task cfce505f-d671-424d-8cc1-b13ffc5a4fcf is in state SUCCESS 2025-07-05 23:08:58.667123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-05 23:08:58.667159 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.667177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-05 23:08:58.667195 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.667212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-05 23:08:58.667223 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.667233 | orchestrator | 2025-07-05 23:08:58.667243 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-07-05 23:08:58.667253 | orchestrator | Saturday 05 July 2025 23:07:37 +0000 (0:00:00.723) 0:00:21.204 ********* 2025-07-05 23:08:58.667271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-05 23:08:58.667288 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.667304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-05 23:08:58.667315 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.667333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-05 23:08:58.667354 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.667364 | orchestrator | 2025-07-05 23:08:58.667374 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-07-05 23:08:58.667384 | orchestrator | Saturday 05 July 2025 23:07:38 +0000 (0:00:01.082) 0:00:22.287 ********* 2025-07-05 23:08:58.667400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-07-05 23:08:58.667424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-05 23:08:58.667443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-05 23:08:58.667460 | orchestrator | 2025-07-05 23:08:58.667470 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-05 23:08:58.667480 | orchestrator | Saturday 05 July 2025 23:07:39 +0000 (0:00:01.473) 0:00:23.760 ********* 2025-07-05 23:08:58.667490 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:08:58.667500 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:08:58.667509 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:08:58.667519 | orchestrator | 2025-07-05 23:08:58.667529 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-05 23:08:58.667539 | orchestrator | Saturday 05 July 2025 23:07:40 +0000 (0:00:00.288) 0:00:24.049 ********* 2025-07-05 23:08:58.667549 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:08:58.667558 | orchestrator | 2025-07-05 23:08:58.667568 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-07-05 23:08:58.667584 | orchestrator | Saturday 05 July 2025 23:07:40 +0000 (0:00:00.732) 0:00:24.781 ********* 2025-07-05 23:08:58.667594 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:08:58.667604 | orchestrator | 2025-07-05 23:08:58.667614 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-07-05 23:08:58.667624 | orchestrator | Saturday 05 July 2025 23:07:43 +0000 (0:00:02.194) 0:00:26.976 
********* 2025-07-05 23:08:58.667634 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:08:58.667644 | orchestrator | 2025-07-05 23:08:58.667654 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-07-05 23:08:58.667664 | orchestrator | Saturday 05 July 2025 23:07:45 +0000 (0:00:02.200) 0:00:29.176 ********* 2025-07-05 23:08:58.667673 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:08:58.667683 | orchestrator | 2025-07-05 23:08:58.667693 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-05 23:08:58.667703 | orchestrator | Saturday 05 July 2025 23:08:00 +0000 (0:00:15.709) 0:00:44.886 ********* 2025-07-05 23:08:58.667713 | orchestrator | 2025-07-05 23:08:58.667723 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-05 23:08:58.667732 | orchestrator | Saturday 05 July 2025 23:08:01 +0000 (0:00:00.065) 0:00:44.951 ********* 2025-07-05 23:08:58.667742 | orchestrator | 2025-07-05 23:08:58.667752 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-05 23:08:58.667762 | orchestrator | Saturday 05 July 2025 23:08:01 +0000 (0:00:00.065) 0:00:45.016 ********* 2025-07-05 23:08:58.667798 | orchestrator | 2025-07-05 23:08:58.667809 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-07-05 23:08:58.667818 | orchestrator | Saturday 05 July 2025 23:08:01 +0000 (0:00:00.064) 0:00:45.081 ********* 2025-07-05 23:08:58.667828 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:08:58.667838 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:08:58.667848 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:08:58.667858 | orchestrator | 2025-07-05 23:08:58.667872 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:08:58.667883 | 
orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-07-05 23:08:58.667893 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-05 23:08:58.667903 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-05 23:08:58.667913 | orchestrator | 2025-07-05 23:08:58.667922 | orchestrator | 2025-07-05 23:08:58.667932 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:08:58.667942 | orchestrator | Saturday 05 July 2025 23:08:57 +0000 (0:00:56.209) 0:01:41.291 ********* 2025-07-05 23:08:58.667952 | orchestrator | =============================================================================== 2025-07-05 23:08:58.667968 | orchestrator | horizon : Restart horizon container ------------------------------------ 56.21s 2025-07-05 23:08:58.667978 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.71s 2025-07-05 23:08:58.667987 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.20s 2025-07-05 23:08:58.667997 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.19s 2025-07-05 23:08:58.668007 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.14s 2025-07-05 23:08:58.668016 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.59s 2025-07-05 23:08:58.668026 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.59s 2025-07-05 23:08:58.668036 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.57s 2025-07-05 23:08:58.668046 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.49s 2025-07-05 23:08:58.668055 | orchestrator | horizon : 
Deploy horizon container -------------------------------------- 1.47s 2025-07-05 23:08:58.668065 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.08s 2025-07-05 23:08:58.668075 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.05s 2025-07-05 23:08:58.668085 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2025-07-05 23:08:58.668094 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.72s 2025-07-05 23:08:58.668104 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2025-07-05 23:08:58.668114 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s 2025-07-05 23:08:58.668123 | orchestrator | horizon : Update policy file name --------------------------------------- 0.46s 2025-07-05 23:08:58.668133 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.45s 2025-07-05 23:08:58.668143 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.44s 2025-07-05 23:08:58.668153 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.40s 2025-07-05 23:08:58.668163 | orchestrator | 2025-07-05 23:08:58 | INFO  | Task b938d70f-ea00-4144-805a-df4b48967200 is in state STARTED 2025-07-05 23:08:58.668173 | orchestrator | 2025-07-05 23:08:58 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:09:01.705109 | orchestrator | 2025-07-05 23:09:01 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:09:01.706809 | orchestrator | 2025-07-05 23:09:01 | INFO  | Task b938d70f-ea00-4144-805a-df4b48967200 is in state STARTED 2025-07-05 23:09:01.706846 | orchestrator | 2025-07-05 23:09:01 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:09:04.756567 | orchestrator | 
2025-07-05 23:09:04 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:09:04.758926 | orchestrator | 2025-07-05 23:09:04 | INFO  | Task b938d70f-ea00-4144-805a-df4b48967200 is in state STARTED 2025-07-05 23:09:04.759056 | orchestrator | 2025-07-05 23:09:04 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:09:50.414135 | orchestrator | 2025-07-05 23:09:50 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:09:50.416436 | orchestrator | 2025-07-05 23:09:50 | INFO  | Task b938d70f-ea00-4144-805a-df4b48967200 is in state STARTED 
2025-07-05 23:09:50.416471 | orchestrator | 2025-07-05 23:09:50 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:09:53.467776 | orchestrator | 2025-07-05 23:09:53 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:09:53.469619 | orchestrator | 2025-07-05 23:09:53 | INFO  | Task b938d70f-ea00-4144-805a-df4b48967200 is in state SUCCESS 2025-07-05 23:09:53.473039 | orchestrator | 2025-07-05 23:09:53 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:09:53.474237 | orchestrator | 2025-07-05 23:09:53 | INFO  | Task 3a9841d8-0e94-45f3-af9c-b29fc2fd1ca4 is in state STARTED 2025-07-05 23:09:53.475996 | orchestrator | 2025-07-05 23:09:53 | INFO  | Task 337e608f-a4a8-4736-9a27-9ca40557db5c is in state STARTED 2025-07-05 23:09:53.476035 | orchestrator | 2025-07-05 23:09:53 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:09:56.528500 | orchestrator | 2025-07-05 23:09:56 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:09:56.529656 | orchestrator | 2025-07-05 23:09:56 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:09:56.534263 | orchestrator | 2025-07-05 23:09:56 | INFO  | Task 3a9841d8-0e94-45f3-af9c-b29fc2fd1ca4 is in state STARTED 2025-07-05 23:09:56.535080 | orchestrator | 2025-07-05 23:09:56 | INFO  | Task 337e608f-a4a8-4736-9a27-9ca40557db5c is in state STARTED 2025-07-05 23:09:56.535129 | orchestrator | 2025-07-05 23:09:56 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:09:59.566777 | orchestrator | 2025-07-05 23:09:59 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:09:59.566973 | orchestrator | 2025-07-05 23:09:59 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state STARTED 2025-07-05 23:09:59.567852 | orchestrator | 2025-07-05 23:09:59 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:09:59.568583 | 
orchestrator | 2025-07-05 23:09:59 | INFO  | Task 5fc80d78-53ba-46aa-85d3-f3a9eb2ae0d3 is in state STARTED 2025-07-05 23:09:59.569474 | orchestrator | 2025-07-05 23:09:59 | INFO  | Task 3a9841d8-0e94-45f3-af9c-b29fc2fd1ca4 is in state SUCCESS 2025-07-05 23:09:59.570348 | orchestrator | 2025-07-05 23:09:59 | INFO  | Task 337e608f-a4a8-4736-9a27-9ca40557db5c is in state STARTED 2025-07-05 23:09:59.571321 | orchestrator | 2025-07-05 23:09:59 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:10:02.611660 | orchestrator | 2025-07-05 23:10:02 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:10:02.613942 | orchestrator | 2025-07-05 23:10:02 | INFO  | Task dc165d83-56ab-4fb2-aad0-3e1f17aa2219 is in state SUCCESS 2025-07-05 23:10:02.615112 | orchestrator | 2025-07-05 23:10:02.615151 | orchestrator | 2025-07-05 23:10:02.615164 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-07-05 23:10:02.615178 | orchestrator | 2025-07-05 23:10:02.615190 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-07-05 23:10:02.615202 | orchestrator | Saturday 05 July 2025 23:08:58 +0000 (0:00:00.211) 0:00:00.211 ********* 2025-07-05 23:10:02.615214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-07-05 23:10:02.615228 | orchestrator | 2025-07-05 23:10:02.615239 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-07-05 23:10:02.615251 | orchestrator | Saturday 05 July 2025 23:08:58 +0000 (0:00:00.204) 0:00:00.416 ********* 2025-07-05 23:10:02.615263 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-07-05 23:10:02.615275 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-07-05 23:10:02.615287 | orchestrator | ok: 
[testbed-manager] => (item=/opt/cephclient) 2025-07-05 23:10:02.615300 | orchestrator | 2025-07-05 23:10:02.615311 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-07-05 23:10:02.615323 | orchestrator | Saturday 05 July 2025 23:08:59 +0000 (0:00:01.122) 0:00:01.538 ********* 2025-07-05 23:10:02.615334 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-07-05 23:10:02.615346 | orchestrator | 2025-07-05 23:10:02.615358 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-07-05 23:10:02.615369 | orchestrator | Saturday 05 July 2025 23:09:00 +0000 (0:00:01.146) 0:00:02.685 ********* 2025-07-05 23:10:02.615381 | orchestrator | changed: [testbed-manager] 2025-07-05 23:10:02.615393 | orchestrator | 2025-07-05 23:10:02.615404 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-07-05 23:10:02.615416 | orchestrator | Saturday 05 July 2025 23:09:01 +0000 (0:00:01.069) 0:00:03.754 ********* 2025-07-05 23:10:02.615427 | orchestrator | changed: [testbed-manager] 2025-07-05 23:10:02.615439 | orchestrator | 2025-07-05 23:10:02.615450 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-07-05 23:10:02.615462 | orchestrator | Saturday 05 July 2025 23:09:02 +0000 (0:00:00.802) 0:00:04.557 ********* 2025-07-05 23:10:02.615984 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-07-05 23:10:02.616009 | orchestrator | ok: [testbed-manager] 2025-07-05 23:10:02.616021 | orchestrator | 2025-07-05 23:10:02.616032 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-07-05 23:10:02.616043 | orchestrator | Saturday 05 July 2025 23:09:41 +0000 (0:00:38.466) 0:00:43.023 ********* 2025-07-05 23:10:02.616055 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-07-05 23:10:02.616066 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-07-05 23:10:02.616077 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-07-05 23:10:02.616088 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-07-05 23:10:02.616099 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-07-05 23:10:02.616111 | orchestrator | 2025-07-05 23:10:02.616121 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-07-05 23:10:02.616133 | orchestrator | Saturday 05 July 2025 23:09:45 +0000 (0:00:03.970) 0:00:46.993 ********* 2025-07-05 23:10:02.616144 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-07-05 23:10:02.616155 | orchestrator | 2025-07-05 23:10:02.616166 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-07-05 23:10:02.616178 | orchestrator | Saturday 05 July 2025 23:09:45 +0000 (0:00:00.461) 0:00:47.455 ********* 2025-07-05 23:10:02.616213 | orchestrator | skipping: [testbed-manager] 2025-07-05 23:10:02.616225 | orchestrator | 2025-07-05 23:10:02.616236 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-07-05 23:10:02.616247 | orchestrator | Saturday 05 July 2025 23:09:45 +0000 (0:00:00.145) 0:00:47.600 ********* 2025-07-05 23:10:02.616258 | orchestrator | skipping: [testbed-manager] 2025-07-05 23:10:02.616269 | orchestrator | 2025-07-05 23:10:02.616281 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2025-07-05 23:10:02.616292 | orchestrator | Saturday 05 July 2025 23:09:45 +0000 (0:00:00.337) 0:00:47.937 ********* 2025-07-05 23:10:02.616303 | orchestrator | changed: [testbed-manager] 2025-07-05 23:10:02.616314 | orchestrator | 2025-07-05 23:10:02.616325 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-07-05 23:10:02.616410 | orchestrator | Saturday 05 July 2025 23:09:47 +0000 (0:00:01.658) 0:00:49.596 ********* 2025-07-05 23:10:02.616425 | orchestrator | changed: [testbed-manager] 2025-07-05 23:10:02.616435 | orchestrator | 2025-07-05 23:10:02.616445 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-07-05 23:10:02.616455 | orchestrator | Saturday 05 July 2025 23:09:48 +0000 (0:00:00.738) 0:00:50.334 ********* 2025-07-05 23:10:02.616465 | orchestrator | changed: [testbed-manager] 2025-07-05 23:10:02.616474 | orchestrator | 2025-07-05 23:10:02.616484 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-07-05 23:10:02.616494 | orchestrator | Saturday 05 July 2025 23:09:48 +0000 (0:00:00.602) 0:00:50.937 ********* 2025-07-05 23:10:02.616504 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-07-05 23:10:02.616514 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-07-05 23:10:02.616581 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-07-05 23:10:02.616595 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-07-05 23:10:02.616605 | orchestrator | 2025-07-05 23:10:02.616615 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:10:02.616626 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 23:10:02.616636 | orchestrator | 2025-07-05 23:10:02.616646 | orchestrator | 2025-07-05 
23:10:02.616692 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:10:02.616704 | orchestrator | Saturday 05 July 2025 23:09:50 +0000 (0:00:01.391) 0:00:52.328 ********* 2025-07-05 23:10:02.616714 | orchestrator | =============================================================================== 2025-07-05 23:10:02.616723 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 38.47s 2025-07-05 23:10:02.616733 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.97s 2025-07-05 23:10:02.616743 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.66s 2025-07-05 23:10:02.616753 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.39s 2025-07-05 23:10:02.616763 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.15s 2025-07-05 23:10:02.616772 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.12s 2025-07-05 23:10:02.616822 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.07s 2025-07-05 23:10:02.616833 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.80s 2025-07-05 23:10:02.616843 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.74s 2025-07-05 23:10:02.616853 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s 2025-07-05 23:10:02.616863 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s 2025-07-05 23:10:02.616872 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.34s 2025-07-05 23:10:02.616882 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s 2025-07-05 23:10:02.616902 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2025-07-05 23:10:02.616912 | orchestrator | 2025-07-05 23:10:02.616922 | orchestrator | 2025-07-05 23:10:02.616932 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:10:02.616942 | orchestrator | 2025-07-05 23:10:02.616951 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 23:10:02.616961 | orchestrator | Saturday 05 July 2025 23:09:54 +0000 (0:00:00.191) 0:00:00.191 ********* 2025-07-05 23:10:02.616971 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:10:02.616981 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:10:02.616998 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:10:02.617008 | orchestrator | 2025-07-05 23:10:02.617018 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:10:02.617028 | orchestrator | Saturday 05 July 2025 23:09:55 +0000 (0:00:00.291) 0:00:00.482 ********* 2025-07-05 23:10:02.617038 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-07-05 23:10:02.617048 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-07-05 23:10:02.617057 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-07-05 23:10:02.617067 | orchestrator | 2025-07-05 23:10:02.617077 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-07-05 23:10:02.617087 | orchestrator | 2025-07-05 23:10:02.617097 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-07-05 23:10:02.617107 | orchestrator | Saturday 05 July 2025 23:09:55 +0000 (0:00:00.627) 0:00:01.110 ********* 2025-07-05 23:10:02.617116 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:10:02.617126 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:10:02.617136 | orchestrator | ok: 
[testbed-node-0] 2025-07-05 23:10:02.617146 | orchestrator | 2025-07-05 23:10:02.617156 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:10:02.617167 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:10:02.617177 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:10:02.617187 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:10:02.617197 | orchestrator | 2025-07-05 23:10:02.617207 | orchestrator | 2025-07-05 23:10:02.617216 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:10:02.617226 | orchestrator | Saturday 05 July 2025 23:09:56 +0000 (0:00:00.687) 0:00:01.797 ********* 2025-07-05 23:10:02.617236 | orchestrator | =============================================================================== 2025-07-05 23:10:02.617246 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.69s 2025-07-05 23:10:02.617255 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-07-05 23:10:02.617265 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-07-05 23:10:02.617275 | orchestrator | 2025-07-05 23:10:02.617284 | orchestrator | 2025-07-05 23:10:02.617294 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:10:02.617304 | orchestrator | 2025-07-05 23:10:02.617314 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 23:10:02.617324 | orchestrator | Saturday 05 July 2025 23:07:16 +0000 (0:00:00.229) 0:00:00.229 ********* 2025-07-05 23:10:02.617333 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:10:02.617343 | 
orchestrator | ok: [testbed-node-1] 2025-07-05 23:10:02.617353 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:10:02.617363 | orchestrator | 2025-07-05 23:10:02.617373 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:10:02.617382 | orchestrator | Saturday 05 July 2025 23:07:16 +0000 (0:00:00.257) 0:00:00.487 ********* 2025-07-05 23:10:02.617392 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-07-05 23:10:02.617408 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-07-05 23:10:02.617418 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-07-05 23:10:02.617428 | orchestrator | 2025-07-05 23:10:02.617438 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-07-05 23:10:02.617447 | orchestrator | 2025-07-05 23:10:02.617491 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-05 23:10:02.617503 | orchestrator | Saturday 05 July 2025 23:07:16 +0000 (0:00:00.343) 0:00:00.831 ********* 2025-07-05 23:10:02.617513 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:10:02.617523 | orchestrator | 2025-07-05 23:10:02.617533 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-07-05 23:10:02.617542 | orchestrator | Saturday 05 July 2025 23:07:17 +0000 (0:00:00.504) 0:00:01.335 ********* 2025-07-05 23:10:02.617558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-05 23:10:02.617579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-05 23:10:02.617592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-05 23:10:02.617610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-05 23:10:02.617653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2025-07-05 23:10:02.617665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-05 23:10:02.617680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-05 23:10:02.617691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-05 23:10:02.617702 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-05 23:10:02.617712 | orchestrator | 2025-07-05 23:10:02.617722 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-07-05 23:10:02.617733 | orchestrator | Saturday 05 July 2025 23:07:19 +0000 (0:00:01.719) 0:00:03.055 ********* 2025-07-05 23:10:02.617761 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-07-05 23:10:02.617771 | orchestrator | 2025-07-05 23:10:02.617803 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-07-05 23:10:02.617813 | orchestrator | Saturday 05 July 2025 23:07:19 +0000 (0:00:00.804) 0:00:03.859 ********* 2025-07-05 23:10:02.617823 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:10:02.617833 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:10:02.617843 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:10:02.617853 | orchestrator | 2025-07-05 23:10:02.617863 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-07-05 23:10:02.617872 | orchestrator | Saturday 05 July 2025 23:07:20 +0000 (0:00:00.357) 0:00:04.217 ********* 2025-07-05 23:10:02.617882 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-05 23:10:02.617892 | orchestrator | 2025-07-05 23:10:02.617902 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2025-07-05 23:10:02.617912 | orchestrator | Saturday 05 July 2025 23:07:20 +0000 (0:00:00.590) 0:00:04.807 ********* 2025-07-05 23:10:02.617922 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:10:02.617932 | orchestrator | 2025-07-05 23:10:02.617947 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-07-05 23:10:02.617957 | orchestrator | Saturday 05 July 2025 23:07:21 +0000 (0:00:00.464) 0:00:05.272 ********* 2025-07-05 23:10:02.617968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-05 23:10:02.617984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-05 23:10:02.617996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-05 23:10:02.618013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618140 | orchestrator |
2025-07-05 23:10:02.618150 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-07-05 23:10:02.618161 | orchestrator | Saturday 05 July 2025 23:07:24 +0000 (0:00:03.272) 0:00:08.545 *********
2025-07-05 23:10:02.618171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.618189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618211 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:10:02.618226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.618238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618265 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:10:02.618281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.618292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618313 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:10:02.618323 | orchestrator |
2025-07-05 23:10:02.618338 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-07-05 23:10:02.618348 | orchestrator | Saturday 05 July 2025 23:07:25 +0000 (0:00:00.512) 0:00:09.057 *********
2025-07-05 23:10:02.618359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.618376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618397 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:10:02.618414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.618426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618457 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:10:02.618468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.618479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618506 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:10:02.618516 | orchestrator |
2025-07-05 23:10:02.618526 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-07-05 23:10:02.618536 | orchestrator | Saturday 05 July 2025 23:07:25 +0000 (0:00:00.769) 0:00:09.827 *********
2025-07-05 23:10:02.618547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.618568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.618580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.618596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618669 | orchestrator |
2025-07-05 23:10:02.618679 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-07-05 23:10:02.618690 | orchestrator | Saturday 05 July 2025 23:07:29 +0000 (0:00:03.356) 0:00:13.183 *********
2025-07-05 23:10:02.618708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.618719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.618754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.618776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.618822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.618891 | orchestrator |
2025-07-05 23:10:02.618906 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-07-05 23:10:02.618917 | orchestrator | Saturday 05 July 2025 23:07:33 +0000 (0:00:04.596) 0:00:17.779 *********
2025-07-05 23:10:02.618950 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:10:02.618960 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:10:02.618970 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:10:02.618980 | orchestrator |
2025-07-05 23:10:02.618990 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-07-05 23:10:02.619000 | orchestrator | Saturday 05 July 2025 23:07:35 +0000 (0:00:01.383) 0:00:19.163 *********
2025-07-05 23:10:02.619009 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:10:02.619019 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:10:02.619029 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:10:02.619039 | orchestrator |
2025-07-05 23:10:02.619048 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-07-05 23:10:02.619058 | orchestrator | Saturday 05 July 2025 23:07:35 +0000 (0:00:00.622) 0:00:19.786 *********
2025-07-05 23:10:02.619068 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:10:02.619077 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:10:02.619087 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:10:02.619096 | orchestrator |
2025-07-05 23:10:02.619106 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-07-05 23:10:02.619116 | orchestrator | Saturday 05 July 2025 23:07:36 +0000 (0:00:00.441) 0:00:20.228 *********
2025-07-05 23:10:02.619126 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:10:02.619135 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:10:02.619145 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:10:02.619155 | orchestrator |
2025-07-05 23:10:02.619165 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-07-05 23:10:02.619174 | orchestrator | Saturday 05 July 2025 23:07:36 +0000 (0:00:00.290) 0:00:20.518 *********
2025-07-05 23:10:02.619185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.619209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.619221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.619237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.619248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-05 23:10:02.619259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-05 23:10:02.619277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-05 23:10:02.619293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-05 23:10:02.619304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-05 23:10:02.619314 | orchestrator | 2025-07-05 23:10:02.619329 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-05 23:10:02.619339 | orchestrator | Saturday 05 July 2025 23:07:39 +0000 (0:00:02.533) 0:00:23.052 ********* 2025-07-05 23:10:02.619349 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:10:02.619358 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:10:02.619368 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:10:02.619378 | orchestrator | 2025-07-05 23:10:02.619388 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-07-05 23:10:02.619398 | orchestrator | Saturday 05 July 2025 23:07:39 +0000 (0:00:00.309) 0:00:23.361 ********* 2025-07-05 23:10:02.619408 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-05 23:10:02.619418 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-05 23:10:02.619428 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-05 23:10:02.619438 | orchestrator | 2025-07-05 
23:10:02.619447 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-07-05 23:10:02.619457 | orchestrator | Saturday 05 July 2025 23:07:41 +0000 (0:00:01.988) 0:00:25.349 ********* 2025-07-05 23:10:02.619467 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-05 23:10:02.619477 | orchestrator | 2025-07-05 23:10:02.619487 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-07-05 23:10:02.619497 | orchestrator | Saturday 05 July 2025 23:07:42 +0000 (0:00:00.889) 0:00:26.239 ********* 2025-07-05 23:10:02.619507 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:10:02.619516 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:10:02.619526 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:10:02.619536 | orchestrator | 2025-07-05 23:10:02.619546 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-07-05 23:10:02.619556 | orchestrator | Saturday 05 July 2025 23:07:42 +0000 (0:00:00.523) 0:00:26.763 ********* 2025-07-05 23:10:02.619565 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-05 23:10:02.619575 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-05 23:10:02.619591 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-05 23:10:02.619600 | orchestrator | 2025-07-05 23:10:02.619610 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-07-05 23:10:02.619620 | orchestrator | Saturday 05 July 2025 23:07:43 +0000 (0:00:01.028) 0:00:27.791 ********* 2025-07-05 23:10:02.619630 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:10:02.619640 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:10:02.619649 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:10:02.619659 | orchestrator | 2025-07-05 23:10:02.619669 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-07-05 
23:10:02.619679 | orchestrator | Saturday 05 July 2025 23:07:44 +0000 (0:00:00.296) 0:00:28.088 ********* 2025-07-05 23:10:02.619688 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-05 23:10:02.619698 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-05 23:10:02.619708 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-05 23:10:02.619718 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-05 23:10:02.619728 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-05 23:10:02.619742 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-05 23:10:02.619753 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-05 23:10:02.619763 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-05 23:10:02.619772 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-05 23:10:02.619798 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-05 23:10:02.619808 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-05 23:10:02.619818 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-05 23:10:02.619828 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-05 23:10:02.619837 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 
'fernet-healthcheck.sh'}) 2025-07-05 23:10:02.619847 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-05 23:10:02.619857 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-05 23:10:02.619867 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-05 23:10:02.619877 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-05 23:10:02.619887 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-05 23:10:02.619896 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-05 23:10:02.619906 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-05 23:10:02.619916 | orchestrator | 2025-07-05 23:10:02.619925 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-07-05 23:10:02.619939 | orchestrator | Saturday 05 July 2025 23:07:53 +0000 (0:00:09.066) 0:00:37.154 ********* 2025-07-05 23:10:02.619949 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-05 23:10:02.619959 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-05 23:10:02.619969 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-05 23:10:02.619985 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-05 23:10:02.619995 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-05 23:10:02.620005 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-05 23:10:02.620015 | orchestrator | 
2025-07-05 23:10:02.620025 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-07-05 23:10:02.620035 | orchestrator | Saturday 05 July 2025 23:07:55 +0000 (0:00:02.654) 0:00:39.808 ********* 2025-07-05 23:10:02.620046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-05 23:10:02.620064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-05 23:10:02.620076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-05 23:10:02.620091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-05 23:10:02.620107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-05 23:10:02.620118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-05 23:10:02.620128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-05 23:10:02.620145 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-05 23:10:02.620156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-05 23:10:02.620166 | orchestrator | 2025-07-05 23:10:02.620176 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-05 23:10:02.620186 | orchestrator | Saturday 05 July 2025 23:07:58 +0000 (0:00:02.551) 0:00:42.359 ********* 2025-07-05 23:10:02.620196 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:10:02.620205 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:10:02.620215 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:10:02.620225 | orchestrator | 2025-07-05 23:10:02.620235 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-07-05 23:10:02.620244 | orchestrator | Saturday 05 July 2025 
23:07:58 +0000 (0:00:00.303) 0:00:42.662 ********* 2025-07-05 23:10:02.620259 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:10:02.620269 | orchestrator | 2025-07-05 23:10:02.620279 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-07-05 23:10:02.620289 | orchestrator | Saturday 05 July 2025 23:08:01 +0000 (0:00:02.346) 0:00:45.009 ********* 2025-07-05 23:10:02.620298 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:10:02.620308 | orchestrator | 2025-07-05 23:10:02.620322 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-07-05 23:10:02.620332 | orchestrator | Saturday 05 July 2025 23:08:03 +0000 (0:00:02.487) 0:00:47.497 ********* 2025-07-05 23:10:02.620342 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:10:02.620351 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:10:02.620361 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:10:02.620371 | orchestrator | 2025-07-05 23:10:02.620381 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-07-05 23:10:02.620391 | orchestrator | Saturday 05 July 2025 23:08:04 +0000 (0:00:00.914) 0:00:48.412 ********* 2025-07-05 23:10:02.620400 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:10:02.620410 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:10:02.620420 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:10:02.620430 | orchestrator | 2025-07-05 23:10:02.620439 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-07-05 23:10:02.620449 | orchestrator | Saturday 05 July 2025 23:08:04 +0000 (0:00:00.354) 0:00:48.766 ********* 2025-07-05 23:10:02.620459 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:10:02.620469 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:10:02.620479 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:10:02.620488 | orchestrator | 2025-07-05 
23:10:02.620498 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-07-05 23:10:02.620508 | orchestrator | Saturday 05 July 2025 23:08:05 +0000 (0:00:00.344) 0:00:49.110 ********* 2025-07-05 23:10:02.620518 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:10:02.620527 | orchestrator | 2025-07-05 23:10:02.620537 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-07-05 23:10:02.620547 | orchestrator | Saturday 05 July 2025 23:08:18 +0000 (0:00:13.315) 0:01:02.426 ********* 2025-07-05 23:10:02.620557 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:10:02.620566 | orchestrator | 2025-07-05 23:10:02.620576 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-05 23:10:02.620586 | orchestrator | Saturday 05 July 2025 23:08:28 +0000 (0:00:09.772) 0:01:12.198 ********* 2025-07-05 23:10:02.620596 | orchestrator | 2025-07-05 23:10:02.620606 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-05 23:10:02.620615 | orchestrator | Saturday 05 July 2025 23:08:28 +0000 (0:00:00.185) 0:01:12.384 ********* 2025-07-05 23:10:02.620625 | orchestrator | 2025-07-05 23:10:02.620635 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-05 23:10:02.620644 | orchestrator | Saturday 05 July 2025 23:08:28 +0000 (0:00:00.059) 0:01:12.443 ********* 2025-07-05 23:10:02.620654 | orchestrator | 2025-07-05 23:10:02.620664 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-07-05 23:10:02.620674 | orchestrator | Saturday 05 July 2025 23:08:28 +0000 (0:00:00.066) 0:01:12.510 ********* 2025-07-05 23:10:02.620683 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:10:02.620693 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:10:02.620702 | orchestrator | changed: 
[testbed-node-2] 2025-07-05 23:10:02.620712 | orchestrator | 2025-07-05 23:10:02.620722 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-07-05 23:10:02.620732 | orchestrator | Saturday 05 July 2025 23:08:50 +0000 (0:00:22.327) 0:01:34.837 ********* 2025-07-05 23:10:02.620742 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:10:02.620751 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:10:02.620761 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:10:02.620771 | orchestrator | 2025-07-05 23:10:02.620812 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-07-05 23:10:02.620823 | orchestrator | Saturday 05 July 2025 23:09:01 +0000 (0:00:10.332) 0:01:45.170 ********* 2025-07-05 23:10:02.620833 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:10:02.620842 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:10:02.620857 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:10:02.620867 | orchestrator | 2025-07-05 23:10:02.620877 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-05 23:10:02.620887 | orchestrator | Saturday 05 July 2025 23:09:13 +0000 (0:00:12.535) 0:01:57.705 ********* 2025-07-05 23:10:02.620897 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:10:02.620907 | orchestrator | 2025-07-05 23:10:02.620917 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-07-05 23:10:02.620927 | orchestrator | Saturday 05 July 2025 23:09:14 +0000 (0:00:00.659) 0:01:58.365 ********* 2025-07-05 23:10:02.620937 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:10:02.620947 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:10:02.620957 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:10:02.620966 | orchestrator | 2025-07-05 
23:10:02.620976 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-07-05 23:10:02.620986 | orchestrator | Saturday 05 July 2025 23:09:15 +0000 (0:00:00.705) 0:01:59.070 ********* 2025-07-05 23:10:02.620996 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:10:02.621006 | orchestrator | 2025-07-05 23:10:02.621016 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-07-05 23:10:02.621025 | orchestrator | Saturday 05 July 2025 23:09:16 +0000 (0:00:01.701) 0:02:00.772 ********* 2025-07-05 23:10:02.621035 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-07-05 23:10:02.621045 | orchestrator | 2025-07-05 23:10:02.621055 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-07-05 23:10:02.621065 | orchestrator | Saturday 05 July 2025 23:09:27 +0000 (0:00:10.551) 0:02:11.323 ********* 2025-07-05 23:10:02.621075 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-07-05 23:10:02.621084 | orchestrator | 2025-07-05 23:10:02.621094 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-07-05 23:10:02.621104 | orchestrator | Saturday 05 July 2025 23:09:49 +0000 (0:00:22.355) 0:02:33.679 ********* 2025-07-05 23:10:02.621114 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-07-05 23:10:02.621124 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-07-05 23:10:02.621133 | orchestrator | 2025-07-05 23:10:02.621148 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-07-05 23:10:02.621158 | orchestrator | Saturday 05 July 2025 23:09:56 +0000 (0:00:07.057) 0:02:40.737 ********* 2025-07-05 23:10:02.621168 | orchestrator | skipping: [testbed-node-0] 2025-07-05 
23:10:02.621178 | orchestrator | 2025-07-05 23:10:02.621188 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-07-05 23:10:02.621198 | orchestrator | Saturday 05 July 2025 23:09:57 +0000 (0:00:00.498) 0:02:41.235 ********* 2025-07-05 23:10:02.621207 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:10:02.621217 | orchestrator | 2025-07-05 23:10:02.621227 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-07-05 23:10:02.621237 | orchestrator | Saturday 05 July 2025 23:09:57 +0000 (0:00:00.247) 0:02:41.482 ********* 2025-07-05 23:10:02.621246 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:10:02.621256 | orchestrator | 2025-07-05 23:10:02.621266 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-07-05 23:10:02.621276 | orchestrator | Saturday 05 July 2025 23:09:57 +0000 (0:00:00.245) 0:02:41.728 ********* 2025-07-05 23:10:02.621286 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:10:02.621296 | orchestrator | 2025-07-05 23:10:02.621306 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-07-05 23:10:02.621321 | orchestrator | Saturday 05 July 2025 23:09:58 +0000 (0:00:00.398) 0:02:42.126 ********* 2025-07-05 23:10:02.621331 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:10:02.621340 | orchestrator | 2025-07-05 23:10:02.621350 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-05 23:10:02.621360 | orchestrator | Saturday 05 July 2025 23:10:01 +0000 (0:00:03.309) 0:02:45.436 ********* 2025-07-05 23:10:02.621370 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:10:02.621379 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:10:02.621389 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:10:02.621399 | orchestrator | 2025-07-05 23:10:02.621409 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-07-05 23:10:02.621419 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-07-05 23:10:02.621430 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-05 23:10:02.621440 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-05 23:10:02.621450 | orchestrator | 2025-07-05 23:10:02.621459 | orchestrator | 2025-07-05 23:10:02.621469 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:10:02.621479 | orchestrator | Saturday 05 July 2025 23:10:01 +0000 (0:00:00.437) 0:02:45.874 ********* 2025-07-05 23:10:02.621489 | orchestrator | =============================================================================== 2025-07-05 23:10:02.621498 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.36s 2025-07-05 23:10:02.621508 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 22.33s 2025-07-05 23:10:02.621518 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.32s 2025-07-05 23:10:02.621528 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.54s 2025-07-05 23:10:02.621538 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.55s 2025-07-05 23:10:02.621552 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.33s 2025-07-05 23:10:02.621562 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.77s 2025-07-05 23:10:02.621572 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.07s 2025-07-05 23:10:02.621582 | orchestrator | service-ks-register : 
keystone | Creating endpoints --------------------- 7.06s 2025-07-05 23:10:02.621592 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.60s 2025-07-05 23:10:02.621602 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.36s 2025-07-05 23:10:02.621611 | orchestrator | keystone : Creating default user role ----------------------------------- 3.31s 2025-07-05 23:10:02.621621 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.27s 2025-07-05 23:10:02.621631 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.65s 2025-07-05 23:10:02.621641 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.55s 2025-07-05 23:10:02.621651 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.53s 2025-07-05 23:10:02.621660 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.49s 2025-07-05 23:10:02.621670 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.35s 2025-07-05 23:10:02.621680 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.99s 2025-07-05 23:10:02.621689 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.72s 2025-07-05 23:10:02.621699 | orchestrator | 2025-07-05 23:10:02 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:10:02.621714 | orchestrator | 2025-07-05 23:10:02 | INFO  | Task 5fc80d78-53ba-46aa-85d3-f3a9eb2ae0d3 is in state STARTED 2025-07-05 23:10:02.621724 | orchestrator | 2025-07-05 23:10:02 | INFO  | Task 337e608f-a4a8-4736-9a27-9ca40557db5c is in state STARTED 2025-07-05 23:10:02.621739 | orchestrator | 2025-07-05 23:10:02 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:10:05.640605 | orchestrator | 2025-07-05 
23:10:05 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED [... repeated STARTED status checks for the same five tasks, polled every ~3 seconds, elided ...] 2025-07-05 23:10:39.067656 | orchestrator | 2025-07-05 23:10:39 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:10:39.070317 | orchestrator | 2025-07-05 23:10:39 | INFO  | Task 5fc80d78-53ba-46aa-85d3-f3a9eb2ae0d3 is in state SUCCESS [... repeated STARTED status checks elided ...] 2025-07-05 23:11:27.607576 | orchestrator | 2025-07-05 23:11:27 | INFO  | Task
afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:11:27.608115 | orchestrator | 2025-07-05 23:11:27 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:11:27.608713 | orchestrator | 2025-07-05 23:11:27 | INFO  | Task 337e608f-a4a8-4736-9a27-9ca40557db5c is in state SUCCESS 2025-07-05 23:11:27.609123 | orchestrator | 2025-07-05 23:11:27.609146 | orchestrator | 2025-07-05 23:11:27.609159 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:11:27.609172 | orchestrator | 2025-07-05 23:11:27.609184 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 23:11:27.609196 | orchestrator | Saturday 05 July 2025 23:10:02 +0000 (0:00:00.410) 0:00:00.410 ********* 2025-07-05 23:11:27.609207 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:11:27.609219 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:11:27.609231 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:11:27.609242 | orchestrator | ok: [testbed-manager] 2025-07-05 23:11:27.609253 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:11:27.609264 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:11:27.609276 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:11:27.609287 | orchestrator | 2025-07-05 23:11:27.609298 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:11:27.609309 | orchestrator | Saturday 05 July 2025 23:10:03 +0000 (0:00:00.881) 0:00:01.292 ********* 2025-07-05 23:11:27.609321 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-07-05 23:11:27.609332 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-07-05 23:11:27.609343 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-07-05 23:11:27.609355 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-07-05 23:11:27.609366 | orchestrator | 
ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-07-05 23:11:27.609393 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-07-05 23:11:27.609405 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-07-05 23:11:27.609416 | orchestrator | 2025-07-05 23:11:27.609428 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-07-05 23:11:27.609439 | orchestrator | 2025-07-05 23:11:27.609450 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-07-05 23:11:27.609461 | orchestrator | Saturday 05 July 2025 23:10:04 +0000 (0:00:01.280) 0:00:02.572 ********* 2025-07-05 23:11:27.609473 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:11:27.609486 | orchestrator | 2025-07-05 23:11:27.609498 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-07-05 23:11:27.609509 | orchestrator | Saturday 05 July 2025 23:10:06 +0000 (0:00:01.559) 0:00:04.132 ********* 2025-07-05 23:11:27.609520 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-07-05 23:11:27.609531 | orchestrator | 2025-07-05 23:11:27.609542 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-07-05 23:11:27.609554 | orchestrator | Saturday 05 July 2025 23:10:10 +0000 (0:00:04.173) 0:00:08.305 ********* 2025-07-05 23:11:27.609566 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-07-05 23:11:27.609578 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-07-05 23:11:27.609590 | orchestrator | 2025-07-05 23:11:27.609601 | 
orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-07-05 23:11:27.609612 | orchestrator | Saturday 05 July 2025 23:10:17 +0000 (0:00:06.671) 0:00:14.977 ********* 2025-07-05 23:11:27.609623 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-05 23:11:27.609634 | orchestrator | 2025-07-05 23:11:27.609645 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-07-05 23:11:27.609657 | orchestrator | Saturday 05 July 2025 23:10:20 +0000 (0:00:03.354) 0:00:18.331 ********* 2025-07-05 23:11:27.609683 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-05 23:11:27.609694 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-07-05 23:11:27.609705 | orchestrator | 2025-07-05 23:11:27.609717 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-07-05 23:11:27.609730 | orchestrator | Saturday 05 July 2025 23:10:24 +0000 (0:00:04.104) 0:00:22.436 ********* 2025-07-05 23:11:27.609742 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-05 23:11:27.609754 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-07-05 23:11:27.609767 | orchestrator | 2025-07-05 23:11:27.609780 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-07-05 23:11:27.609819 | orchestrator | Saturday 05 July 2025 23:10:30 +0000 (0:00:06.382) 0:00:28.818 ********* 2025-07-05 23:11:27.609833 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-07-05 23:11:27.609845 | orchestrator | 2025-07-05 23:11:27.609856 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:11:27.609868 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:11:27.609880 | orchestrator | testbed-node-0 : 
ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:11:27.609892 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:11:27.609903 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:11:27.609915 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:11:27.609937 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:11:27.609949 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:11:27.609960 | orchestrator | 2025-07-05 23:11:27.609972 | orchestrator | 2025-07-05 23:11:27.609983 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:11:27.609994 | orchestrator | Saturday 05 July 2025 23:10:36 +0000 (0:00:05.445) 0:00:34.264 ********* 2025-07-05 23:11:27.610006 | orchestrator | =============================================================================== 2025-07-05 23:11:27.610062 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.67s 2025-07-05 23:11:27.610077 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.38s 2025-07-05 23:11:27.610089 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.45s 2025-07-05 23:11:27.610100 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.17s 2025-07-05 23:11:27.610111 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.10s 2025-07-05 23:11:27.610122 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.35s 2025-07-05 23:11:27.610140 | orchestrator | ceph-rgw : include_tasks 
------------------------------------------------ 1.56s 2025-07-05 23:11:27.610151 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.28s 2025-07-05 23:11:27.610163 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.88s 2025-07-05 23:11:27.610174 | orchestrator | 2025-07-05 23:11:27.610185 | orchestrator | 2025-07-05 23:11:27.610196 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2025-07-05 23:11:27.610207 | orchestrator | 2025-07-05 23:11:27.610219 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-07-05 23:11:27.610230 | orchestrator | Saturday 05 July 2025 23:09:54 +0000 (0:00:00.271) 0:00:00.271 ********* 2025-07-05 23:11:27.610249 | orchestrator | changed: [testbed-manager] 2025-07-05 23:11:27.610260 | orchestrator | 2025-07-05 23:11:27.610272 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-07-05 23:11:27.610283 | orchestrator | Saturday 05 July 2025 23:09:56 +0000 (0:00:01.543) 0:00:01.815 ********* 2025-07-05 23:11:27.610295 | orchestrator | changed: [testbed-manager] 2025-07-05 23:11:27.610306 | orchestrator | 2025-07-05 23:11:27.610317 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-07-05 23:11:27.610328 | orchestrator | Saturday 05 July 2025 23:09:57 +0000 (0:00:01.098) 0:00:02.914 ********* 2025-07-05 23:11:27.610340 | orchestrator | changed: [testbed-manager] 2025-07-05 23:11:27.610351 | orchestrator | 2025-07-05 23:11:27.610362 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-07-05 23:11:27.610374 | orchestrator | Saturday 05 July 2025 23:09:58 +0000 (0:00:01.028) 0:00:03.943 ********* 2025-07-05 23:11:27.610385 | orchestrator | changed: [testbed-manager] 2025-07-05 23:11:27.610396 | orchestrator | 2025-07-05
23:11:27.610407 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-07-05 23:11:27.610419 | orchestrator | Saturday 05 July 2025 23:09:59 +0000 (0:00:01.234) 0:00:05.177 ********* 2025-07-05 23:11:27.610430 | orchestrator | changed: [testbed-manager] 2025-07-05 23:11:27.610441 | orchestrator | 2025-07-05 23:11:27.610452 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-07-05 23:11:27.610464 | orchestrator | Saturday 05 July 2025 23:10:00 +0000 (0:00:01.242) 0:00:06.420 ********* 2025-07-05 23:11:27.610475 | orchestrator | changed: [testbed-manager] 2025-07-05 23:11:27.610486 | orchestrator | 2025-07-05 23:11:27.610497 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-07-05 23:11:27.610508 | orchestrator | Saturday 05 July 2025 23:10:01 +0000 (0:00:00.951) 0:00:07.371 ********* 2025-07-05 23:11:27.610520 | orchestrator | changed: [testbed-manager] 2025-07-05 23:11:27.610531 | orchestrator | 2025-07-05 23:11:27.610542 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-07-05 23:11:27.610554 | orchestrator | Saturday 05 July 2025 23:10:03 +0000 (0:00:01.135) 0:00:08.506 ********* 2025-07-05 23:11:27.610565 | orchestrator | changed: [testbed-manager] 2025-07-05 23:11:27.610576 | orchestrator | 2025-07-05 23:11:27.610587 | orchestrator | TASK [Create admin user] ******************************************************* 2025-07-05 23:11:27.610599 | orchestrator | Saturday 05 July 2025 23:10:03 +0000 (0:00:00.962) 0:00:09.469 ********* 2025-07-05 23:11:27.610610 | orchestrator | changed: [testbed-manager] 2025-07-05 23:11:27.610621 | orchestrator | 2025-07-05 23:11:27.610633 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-07-05 23:11:27.610644 | orchestrator | Saturday 05 July 2025 23:11:00 +0000 (0:00:56.486) 
0:01:05.956 ********* 2025-07-05 23:11:27.610655 | orchestrator | skipping: [testbed-manager] 2025-07-05 23:11:27.610667 | orchestrator | 2025-07-05 23:11:27.610678 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-07-05 23:11:27.610689 | orchestrator | 2025-07-05 23:11:27.610701 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-07-05 23:11:27.610712 | orchestrator | Saturday 05 July 2025 23:11:00 +0000 (0:00:00.137) 0:01:06.094 ********* 2025-07-05 23:11:27.610723 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:11:27.610734 | orchestrator | 2025-07-05 23:11:27.610746 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-07-05 23:11:27.610757 | orchestrator | 2025-07-05 23:11:27.610768 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-07-05 23:11:27.610779 | orchestrator | Saturday 05 July 2025 23:11:12 +0000 (0:00:11.561) 0:01:17.655 ********* 2025-07-05 23:11:27.610807 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:11:27.610819 | orchestrator | 2025-07-05 23:11:27.610830 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-07-05 23:11:27.610841 | orchestrator | 2025-07-05 23:11:27.610852 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-07-05 23:11:27.610871 | orchestrator | Saturday 05 July 2025 23:11:23 +0000 (0:00:11.334) 0:01:28.990 ********* 2025-07-05 23:11:27.610882 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:11:27.610893 | orchestrator | 2025-07-05 23:11:27.610912 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:11:27.610924 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-05 23:11:27.610936 | 
orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:11:27.610947 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:11:27.610958 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:11:27.610970 | orchestrator | 2025-07-05 23:11:27.610981 | orchestrator | 2025-07-05 23:11:27.610992 | orchestrator | 2025-07-05 23:11:27.611003 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:11:27.611014 | orchestrator | Saturday 05 July 2025 23:11:24 +0000 (0:00:01.234) 0:01:30.224 ********* 2025-07-05 23:11:27.611025 | orchestrator | =============================================================================== 2025-07-05 23:11:27.611041 | orchestrator | Create admin user ------------------------------------------------------ 56.49s 2025-07-05 23:11:27.611053 | orchestrator | Restart ceph manager service ------------------------------------------- 24.13s 2025-07-05 23:11:27.611064 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.54s 2025-07-05 23:11:27.611075 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.24s 2025-07-05 23:11:27.611086 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.23s 2025-07-05 23:11:27.611097 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.14s 2025-07-05 23:11:27.611108 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.10s 2025-07-05 23:11:27.611119 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.03s 2025-07-05 23:11:27.611130 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.96s 2025-07-05 
23:11:27.611141 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.95s 2025-07-05 23:11:27.611152 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s 2025-07-05 23:11:27.611163 | orchestrator | 2025-07-05 23:11:27 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:11:30.630953 | orchestrator | 2025-07-05 23:11:30 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:11:30.631056 | orchestrator | 2025-07-05 23:11:30 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:11:30.631651 | orchestrator | 2025-07-05 23:11:30 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:11:30.632034 | orchestrator | 2025-07-05 23:11:30 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:11:30.632159 | orchestrator | 2025-07-05 23:11:30 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:11:33.662674 | orchestrator | 2025-07-05 23:11:33 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:11:33.662777 | orchestrator | 2025-07-05 23:11:33 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:11:33.663243 | orchestrator | 2025-07-05 23:11:33 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:11:33.664008 | orchestrator | 2025-07-05 23:11:33 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:11:33.664063 | orchestrator | 2025-07-05 23:11:33 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:11:36.697065 | orchestrator | 2025-07-05 23:11:36 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:11:36.699531 | orchestrator | 2025-07-05 23:11:36 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:11:36.700324 | orchestrator | 2025-07-05 23:11:36 | 
INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:11:36.700953 | orchestrator | 2025-07-05 23:11:36 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:11:36.701056 | orchestrator | 2025-07-05 23:11:36 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:11:39.729697 | orchestrator | 2025-07-05 23:11:39 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:11:39.731247 | orchestrator | 2025-07-05 23:11:39 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:11:39.732518 | orchestrator | 2025-07-05 23:11:39 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:11:39.733355 | orchestrator | 2025-07-05 23:11:39 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:11:39.733470 | orchestrator | 2025-07-05 23:11:39 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:11:42.773616 | orchestrator | 2025-07-05 23:11:42 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:11:42.774319 | orchestrator | 2025-07-05 23:11:42 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:11:42.776354 | orchestrator | 2025-07-05 23:11:42 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:11:42.778811 | orchestrator | 2025-07-05 23:11:42 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:11:42.779635 | orchestrator | 2025-07-05 23:11:42 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:11:45.817357 | orchestrator | 2025-07-05 23:11:45 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:11:45.818183 | orchestrator | 2025-07-05 23:11:45 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:11:45.820418 | orchestrator | 2025-07-05 23:11:45 | INFO  | Task 
afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:11:45.822260 | orchestrator | 2025-07-05 23:11:45 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:11:45.822378 | orchestrator | 2025-07-05 23:11:45 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:11:48.860091 | orchestrator | 2025-07-05 23:11:48 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:11:48.860194 | orchestrator | 2025-07-05 23:11:48 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:11:48.860313 | orchestrator | 2025-07-05 23:11:48 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:11:48.863848 | orchestrator | 2025-07-05 23:11:48 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:11:48.863877 | orchestrator | 2025-07-05 23:11:48 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:11:51.892215 | orchestrator | 2025-07-05 23:11:51 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:11:51.893554 | orchestrator | 2025-07-05 23:11:51 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:11:51.893954 | orchestrator | 2025-07-05 23:11:51 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:11:51.894683 | orchestrator | 2025-07-05 23:11:51 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:11:51.894711 | orchestrator | 2025-07-05 23:11:51 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:11:54.965028 | orchestrator | 2025-07-05 23:11:54 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:11:54.965131 | orchestrator | 2025-07-05 23:11:54 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:11:54.965158 | orchestrator | 2025-07-05 23:11:54 | INFO  | Task 
afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:11:54.975506 | orchestrator | 2025-07-05 23:11:54 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:11:54.975583 | orchestrator | 2025-07-05 23:11:54 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:11:58.017102 | orchestrator | 2025-07-05 23:11:58 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:11:58.017193 | orchestrator | 2025-07-05 23:11:58 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:11:58.017621 | orchestrator | 2025-07-05 23:11:58 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:11:58.019085 | orchestrator | 2025-07-05 23:11:58 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:11:58.019122 | orchestrator | 2025-07-05 23:11:58 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:01.065435 | orchestrator | 2025-07-05 23:12:01 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:01.066649 | orchestrator | 2025-07-05 23:12:01 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:01.068193 | orchestrator | 2025-07-05 23:12:01 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:01.069900 | orchestrator | 2025-07-05 23:12:01 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:01.069942 | orchestrator | 2025-07-05 23:12:01 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:04.129611 | orchestrator | 2025-07-05 23:12:04 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:04.131216 | orchestrator | 2025-07-05 23:12:04 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:04.132568 | orchestrator | 2025-07-05 23:12:04 | INFO  | Task 
afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:04.134289 | orchestrator | 2025-07-05 23:12:04 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:04.134323 | orchestrator | 2025-07-05 23:12:04 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:07.187401 | orchestrator | 2025-07-05 23:12:07 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:07.187522 | orchestrator | 2025-07-05 23:12:07 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:07.187558 | orchestrator | 2025-07-05 23:12:07 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:07.189179 | orchestrator | 2025-07-05 23:12:07 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:07.189533 | orchestrator | 2025-07-05 23:12:07 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:10.229298 | orchestrator | 2025-07-05 23:12:10 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:10.230836 | orchestrator | 2025-07-05 23:12:10 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:10.232183 | orchestrator | 2025-07-05 23:12:10 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:10.233450 | orchestrator | 2025-07-05 23:12:10 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:10.233484 | orchestrator | 2025-07-05 23:12:10 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:13.281038 | orchestrator | 2025-07-05 23:12:13 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:13.284197 | orchestrator | 2025-07-05 23:12:13 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:13.286202 | orchestrator | 2025-07-05 23:12:13 | INFO  | Task 
afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:13.288569 | orchestrator | 2025-07-05 23:12:13 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:13.288607 | orchestrator | 2025-07-05 23:12:13 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:16.335722 | orchestrator | 2025-07-05 23:12:16 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:16.336715 | orchestrator | 2025-07-05 23:12:16 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:16.341156 | orchestrator | 2025-07-05 23:12:16 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:16.342309 | orchestrator | 2025-07-05 23:12:16 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:16.342703 | orchestrator | 2025-07-05 23:12:16 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:19.399012 | orchestrator | 2025-07-05 23:12:19 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:19.403383 | orchestrator | 2025-07-05 23:12:19 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:19.407178 | orchestrator | 2025-07-05 23:12:19 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:19.412347 | orchestrator | 2025-07-05 23:12:19 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:19.412450 | orchestrator | 2025-07-05 23:12:19 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:22.458395 | orchestrator | 2025-07-05 23:12:22 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:22.459048 | orchestrator | 2025-07-05 23:12:22 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:22.460081 | orchestrator | 2025-07-05 23:12:22 | INFO  | Task 
afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:22.460972 | orchestrator | 2025-07-05 23:12:22 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:22.461274 | orchestrator | 2025-07-05 23:12:22 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:25.545980 | orchestrator | 2025-07-05 23:12:25 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:25.546163 | orchestrator | 2025-07-05 23:12:25 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:25.546536 | orchestrator | 2025-07-05 23:12:25 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:25.547273 | orchestrator | 2025-07-05 23:12:25 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:25.548278 | orchestrator | 2025-07-05 23:12:25 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:28.571959 | orchestrator | 2025-07-05 23:12:28 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:28.572133 | orchestrator | 2025-07-05 23:12:28 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:28.572995 | orchestrator | 2025-07-05 23:12:28 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:28.574903 | orchestrator | 2025-07-05 23:12:28 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:28.574952 | orchestrator | 2025-07-05 23:12:28 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:31.608838 | orchestrator | 2025-07-05 23:12:31 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:31.609064 | orchestrator | 2025-07-05 23:12:31 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:31.609747 | orchestrator | 2025-07-05 23:12:31 | INFO  | Task 
afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:31.610392 | orchestrator | 2025-07-05 23:12:31 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:31.611003 | orchestrator | 2025-07-05 23:12:31 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:34.643912 | orchestrator | 2025-07-05 23:12:34 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:34.644090 | orchestrator | 2025-07-05 23:12:34 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:34.645131 | orchestrator | 2025-07-05 23:12:34 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:34.648055 | orchestrator | 2025-07-05 23:12:34 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:34.648091 | orchestrator | 2025-07-05 23:12:34 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:37.691243 | orchestrator | 2025-07-05 23:12:37 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:37.691462 | orchestrator | 2025-07-05 23:12:37 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:37.692036 | orchestrator | 2025-07-05 23:12:37 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:37.692564 | orchestrator | 2025-07-05 23:12:37 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:37.692712 | orchestrator | 2025-07-05 23:12:37 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:40.717508 | orchestrator | 2025-07-05 23:12:40 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:40.718534 | orchestrator | 2025-07-05 23:12:40 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:40.719135 | orchestrator | 2025-07-05 23:12:40 | INFO  | Task 
afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:40.719987 | orchestrator | 2025-07-05 23:12:40 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:40.720024 | orchestrator | 2025-07-05 23:12:40 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:43.759031 | orchestrator | 2025-07-05 23:12:43 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:43.760642 | orchestrator | 2025-07-05 23:12:43 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:43.764722 | orchestrator | 2025-07-05 23:12:43 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:43.765300 | orchestrator | 2025-07-05 23:12:43 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:43.765410 | orchestrator | 2025-07-05 23:12:43 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:46.806460 | orchestrator | 2025-07-05 23:12:46 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:46.809193 | orchestrator | 2025-07-05 23:12:46 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:46.811078 | orchestrator | 2025-07-05 23:12:46 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:46.813821 | orchestrator | 2025-07-05 23:12:46 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:46.813970 | orchestrator | 2025-07-05 23:12:46 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:49.869347 | orchestrator | 2025-07-05 23:12:49 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:49.871622 | orchestrator | 2025-07-05 23:12:49 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:49.873628 | orchestrator | 2025-07-05 23:12:49 | INFO  | Task 
afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:49.875028 | orchestrator | 2025-07-05 23:12:49 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:49.875067 | orchestrator | 2025-07-05 23:12:49 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:52.917272 | orchestrator | 2025-07-05 23:12:52 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:52.918916 | orchestrator | 2025-07-05 23:12:52 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:52.919504 | orchestrator | 2025-07-05 23:12:52 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:52.920290 | orchestrator | 2025-07-05 23:12:52 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:52.920414 | orchestrator | 2025-07-05 23:12:52 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:55.964621 | orchestrator | 2025-07-05 23:12:55 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state STARTED 2025-07-05 23:12:55.966173 | orchestrator | 2025-07-05 23:12:55 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:55.968024 | orchestrator | 2025-07-05 23:12:55 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:55.969804 | orchestrator | 2025-07-05 23:12:55 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:55.969848 | orchestrator | 2025-07-05 23:12:55 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:12:59.019557 | orchestrator | 2025-07-05 23:12:59 | INFO  | Task ef16452b-5ad6-4b57-bd5e-b95c372c5fef is in state SUCCESS 2025-07-05 23:12:59.021163 | orchestrator | 2025-07-05 23:12:59.021204 | orchestrator | 2025-07-05 23:12:59.021216 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:12:59.021228 | 
orchestrator | 2025-07-05 23:12:59.021238 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 23:12:59.021249 | orchestrator | Saturday 05 July 2025 23:10:02 +0000 (0:00:00.302) 0:00:00.302 ********* 2025-07-05 23:12:59.021259 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:12:59.021271 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:12:59.021281 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:12:59.021291 | orchestrator | 2025-07-05 23:12:59.021302 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:12:59.021338 | orchestrator | Saturday 05 July 2025 23:10:02 +0000 (0:00:00.332) 0:00:00.635 ********* 2025-07-05 23:12:59.021349 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-07-05 23:12:59.021359 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-07-05 23:12:59.021369 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-07-05 23:12:59.021379 | orchestrator | 2025-07-05 23:12:59.021389 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-07-05 23:12:59.021399 | orchestrator | 2025-07-05 23:12:59.021409 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-05 23:12:59.021420 | orchestrator | Saturday 05 July 2025 23:10:02 +0000 (0:00:00.457) 0:00:01.092 ********* 2025-07-05 23:12:59.021429 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:12:59.021440 | orchestrator | 2025-07-05 23:12:59.021450 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-07-05 23:12:59.021460 | orchestrator | Saturday 05 July 2025 23:10:03 +0000 (0:00:00.652) 0:00:01.745 ********* 2025-07-05 23:12:59.021470 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 
2025-07-05 23:12:59.021480 | orchestrator | 2025-07-05 23:12:59.021490 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-07-05 23:12:59.021500 | orchestrator | Saturday 05 July 2025 23:10:08 +0000 (0:00:04.411) 0:00:06.157 ********* 2025-07-05 23:12:59.021510 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-07-05 23:12:59.021521 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-07-05 23:12:59.021531 | orchestrator | 2025-07-05 23:12:59.021541 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-07-05 23:12:59.021551 | orchestrator | Saturday 05 July 2025 23:10:14 +0000 (0:00:06.658) 0:00:12.815 ********* 2025-07-05 23:12:59.021561 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-07-05 23:12:59.021571 | orchestrator | 2025-07-05 23:12:59.021581 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-07-05 23:12:59.021591 | orchestrator | Saturday 05 July 2025 23:10:18 +0000 (0:00:03.948) 0:00:16.764 ********* 2025-07-05 23:12:59.021602 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-05 23:12:59.021612 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-07-05 23:12:59.021622 | orchestrator | 2025-07-05 23:12:59.021632 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-07-05 23:12:59.021641 | orchestrator | Saturday 05 July 2025 23:10:22 +0000 (0:00:04.129) 0:00:20.893 ********* 2025-07-05 23:12:59.021651 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-05 23:12:59.021661 | orchestrator | 2025-07-05 23:12:59.021671 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-07-05 23:12:59.021681 | orchestrator 
| Saturday 05 July 2025 23:10:26 +0000 (0:00:03.384) 0:00:24.277 ********* 2025-07-05 23:12:59.021691 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-07-05 23:12:59.021701 | orchestrator | 2025-07-05 23:12:59.021711 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-07-05 23:12:59.021735 | orchestrator | Saturday 05 July 2025 23:10:30 +0000 (0:00:04.347) 0:00:28.624 ********* 2025-07-05 23:12:59.021820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-05 23:12:59.021889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-05 23:12:59.021913 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-05 23:12:59.021933 | orchestrator | 2025-07-05 23:12:59.021945 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-05 23:12:59.021956 | orchestrator | Saturday 05 July 2025 23:10:33 +0000 (0:00:02.805) 0:00:31.430 ********* 2025-07-05 
23:12:59.021967 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:12:59.021979 | orchestrator | 2025-07-05 23:12:59.021998 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-07-05 23:12:59.022011 | orchestrator | Saturday 05 July 2025 23:10:33 +0000 (0:00:00.515) 0:00:31.945 ********* 2025-07-05 23:12:59.022074 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:12:59.022097 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:12:59.022109 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:12:59.022121 | orchestrator | 2025-07-05 23:12:59.022131 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-07-05 23:12:59.022141 | orchestrator | Saturday 05 July 2025 23:10:37 +0000 (0:00:03.366) 0:00:35.312 ********* 2025-07-05 23:12:59.022151 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-05 23:12:59.022161 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-05 23:12:59.022171 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-05 23:12:59.022181 | orchestrator | 2025-07-05 23:12:59.022191 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-07-05 23:12:59.022201 | orchestrator | Saturday 05 July 2025 23:10:39 +0000 (0:00:01.831) 0:00:37.143 ********* 2025-07-05 23:12:59.022211 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-05 23:12:59.022221 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-05 23:12:59.022231 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-05 23:12:59.022241 | orchestrator | 2025-07-05 23:12:59.022251 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-07-05 23:12:59.022261 | orchestrator | Saturday 05 July 2025 23:10:40 +0000 (0:00:01.305) 0:00:38.449 ********* 2025-07-05 23:12:59.022271 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:12:59.022281 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:12:59.022291 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:12:59.022301 | orchestrator | 2025-07-05 23:12:59.022311 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-07-05 23:12:59.022321 | orchestrator | Saturday 05 July 2025 23:10:41 +0000 (0:00:00.915) 0:00:39.365 ********* 2025-07-05 23:12:59.022331 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:12:59.022340 | orchestrator | 2025-07-05 23:12:59.022350 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-07-05 23:12:59.022360 | orchestrator | Saturday 05 July 2025 23:10:41 +0000 (0:00:00.205) 0:00:39.571 ********* 2025-07-05 23:12:59.022370 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:12:59.022380 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:12:59.022390 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:12:59.022400 | orchestrator | 2025-07-05 23:12:59.022409 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-05 23:12:59.022419 | orchestrator | Saturday 05 July 2025 23:10:42 +0000 (0:00:00.580) 0:00:40.151 ********* 2025-07-05 23:12:59.022436 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:12:59.022446 | orchestrator | 2025-07-05 23:12:59.022456 | orchestrator | TASK [service-cert-copy : glance | Copying 
over extra CA certificates] ********* 2025-07-05 23:12:59.022466 | orchestrator | Saturday 05 July 2025 23:10:43 +0000 (0:00:01.516) 0:00:41.667 ********* 2025-07-05 23:12:59.022490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-05 23:12:59.022503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-05 23:12:59.022520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-05 23:12:59.022537 | orchestrator | 2025-07-05 23:12:59.022548 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-07-05 23:12:59.022558 | orchestrator | Saturday 05 July 2025 23:10:49 +0000 (0:00:05.759) 0:00:47.427 ********* 2025-07-05 23:12:59.022577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': 
'', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-05 23:12:59.022589 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:12:59.022604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-05 23:12:59.022622 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:12:59.022639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-05 23:12:59.022651 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:12:59.022661 | orchestrator | 2025-07-05 23:12:59.022670 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-07-05 23:12:59.022680 | orchestrator | Saturday 05 July 2025 23:10:52 +0000 (0:00:03.255) 0:00:50.682 ********* 2025-07-05 23:12:59.022692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-05 23:12:59.022708 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:12:59.022730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-05 23:12:59.022741 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:12:59.022752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-05 23:12:59.022796 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:12:59.022806 | orchestrator | 2025-07-05 23:12:59.022816 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-07-05 23:12:59.022826 | orchestrator | Saturday 05 July 2025 23:10:55 +0000 (0:00:03.175) 0:00:53.858 ********* 2025-07-05 23:12:59.022836 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:12:59.022846 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:12:59.022856 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:12:59.022865 | orchestrator | 2025-07-05 23:12:59.022875 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-07-05 23:12:59.022885 | orchestrator | Saturday 05 July 2025 23:11:00 +0000 (0:00:05.036) 0:00:58.894 ********* 2025-07-05 23:12:59.022906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-05 23:12:59.022918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-05 23:12:59.022937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-05 23:12:59.022948 | orchestrator | 2025-07-05 23:12:59.022958 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-07-05 23:12:59.022968 | orchestrator | Saturday 05 July 2025 23:11:06 +0000 (0:00:05.291) 0:01:04.191 ********* 2025-07-05 23:12:59.022978 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:12:59.022988 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:12:59.022998 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:12:59.023008 | orchestrator | 2025-07-05 23:12:59.023017 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-07-05 23:12:59.023027 | orchestrator | Saturday 05 July 2025 23:11:15 +0000 (0:00:09.882) 0:01:14.073 ********* 2025-07-05 23:12:59.023037 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:12:59.023047 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:12:59.023057 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:12:59.023067 | 
orchestrator | 2025-07-05 23:12:59.023077 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-07-05 23:12:59.023092 | orchestrator | Saturday 05 July 2025 23:11:21 +0000 (0:00:05.533) 0:01:19.607 ********* 2025-07-05 23:12:59.023103 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:12:59.023158 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:12:59.023171 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:12:59.023181 | orchestrator | 2025-07-05 23:12:59.023191 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-07-05 23:12:59.023201 | orchestrator | Saturday 05 July 2025 23:11:26 +0000 (0:00:04.888) 0:01:24.496 ********* 2025-07-05 23:12:59.023211 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:12:59.023228 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:12:59.023238 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:12:59.023248 | orchestrator | 2025-07-05 23:12:59.023257 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-07-05 23:12:59.023268 | orchestrator | Saturday 05 July 2025 23:11:30 +0000 (0:00:03.891) 0:01:28.387 ********* 2025-07-05 23:12:59.023277 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:12:59.023287 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:12:59.023297 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:12:59.023306 | orchestrator | 2025-07-05 23:12:59.023316 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-07-05 23:12:59.023326 | orchestrator | Saturday 05 July 2025 23:11:33 +0000 (0:00:03.199) 0:01:31.586 ********* 2025-07-05 23:12:59.023420 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:12:59.023445 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:12:59.023455 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:12:59.023465 | 
orchestrator | 2025-07-05 23:12:59.023475 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-07-05 23:12:59.023485 | orchestrator | Saturday 05 July 2025 23:11:33 +0000 (0:00:00.314) 0:01:31.900 ********* 2025-07-05 23:12:59.023495 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-05 23:12:59.023505 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:12:59.023515 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-05 23:12:59.023525 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:12:59.023535 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-05 23:12:59.023545 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:12:59.023555 | orchestrator | 2025-07-05 23:12:59.023565 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-07-05 23:12:59.023575 | orchestrator | Saturday 05 July 2025 23:11:37 +0000 (0:00:03.375) 0:01:35.276 ********* 2025-07-05 23:12:59.023590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-05 23:12:59.023615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-05 23:12:59.023638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-05 23:12:59.023650 | orchestrator | 2025-07-05 23:12:59.023660 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-05 23:12:59.023670 | orchestrator | Saturday 05 July 2025 23:11:40 +0000 (0:00:03.469) 0:01:38.746 ********* 2025-07-05 23:12:59.023680 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:12:59.023690 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:12:59.023699 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:12:59.023709 | orchestrator | 2025-07-05 23:12:59.023719 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-07-05 23:12:59.023729 | orchestrator | Saturday 05 July 2025 23:11:40 +0000 (0:00:00.248) 0:01:38.994 ********* 2025-07-05 23:12:59.023739 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:12:59.023749 | orchestrator | 2025-07-05 23:12:59.023775 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-07-05 23:12:59.023791 | orchestrator | Saturday 05 July 2025 23:11:43 +0000 (0:00:02.222) 0:01:41.216 ********* 2025-07-05 23:12:59.023801 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:12:59.023811 | orchestrator | 2025-07-05 23:12:59.023821 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-07-05 23:12:59.023831 | orchestrator | Saturday 05 July 2025 23:11:45 +0000 (0:00:02.114) 0:01:43.330 ********* 2025-07-05 23:12:59.023841 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:12:59.023851 
| orchestrator | 2025-07-05 23:12:59.023861 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-07-05 23:12:59.023871 | orchestrator | Saturday 05 July 2025 23:11:47 +0000 (0:00:01.990) 0:01:45.321 ********* 2025-07-05 23:12:59.023881 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:12:59.023890 | orchestrator | 2025-07-05 23:12:59.023900 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-07-05 23:12:59.023910 | orchestrator | Saturday 05 July 2025 23:12:16 +0000 (0:00:28.928) 0:02:14.249 ********* 2025-07-05 23:12:59.023920 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:12:59.023930 | orchestrator | 2025-07-05 23:12:59.023946 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-07-05 23:12:59.023956 | orchestrator | Saturday 05 July 2025 23:12:19 +0000 (0:00:03.110) 0:02:17.359 ********* 2025-07-05 23:12:59.023966 | orchestrator | 2025-07-05 23:12:59.023976 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-07-05 23:12:59.023986 | orchestrator | Saturday 05 July 2025 23:12:19 +0000 (0:00:00.187) 0:02:17.547 ********* 2025-07-05 23:12:59.023996 | orchestrator | 2025-07-05 23:12:59.024006 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-07-05 23:12:59.024015 | orchestrator | Saturday 05 July 2025 23:12:19 +0000 (0:00:00.206) 0:02:17.754 ********* 2025-07-05 23:12:59.024025 | orchestrator | 2025-07-05 23:12:59.024035 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-07-05 23:12:59.024045 | orchestrator | Saturday 05 July 2025 23:12:19 +0000 (0:00:00.157) 0:02:17.911 ********* 2025-07-05 23:12:59.024055 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:12:59.024064 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:12:59.024074 | 
orchestrator | changed: [testbed-node-1] 2025-07-05 23:12:59.024084 | orchestrator | 2025-07-05 23:12:59.024094 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:12:59.024105 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-05 23:12:59.024116 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-05 23:12:59.024125 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-05 23:12:59.024135 | orchestrator | 2025-07-05 23:12:59.024145 | orchestrator | 2025-07-05 23:12:59.024155 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:12:59.024165 | orchestrator | Saturday 05 July 2025 23:12:57 +0000 (0:00:38.204) 0:02:56.116 ********* 2025-07-05 23:12:59.024175 | orchestrator | =============================================================================== 2025-07-05 23:12:59.024185 | orchestrator | glance : Restart glance-api container ---------------------------------- 38.20s 2025-07-05 23:12:59.024195 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.93s 2025-07-05 23:12:59.024205 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.88s 2025-07-05 23:12:59.024214 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.66s 2025-07-05 23:12:59.024224 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.76s 2025-07-05 23:12:59.024234 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.53s 2025-07-05 23:12:59.024249 | orchestrator | glance : Copying over config.json files for services -------------------- 5.30s 2025-07-05 23:12:59.024259 | orchestrator | glance : Creating 
TLS backend PEM File ---------------------------------- 5.04s 2025-07-05 23:12:59.024269 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.89s 2025-07-05 23:12:59.024278 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.41s 2025-07-05 23:12:59.024288 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.35s 2025-07-05 23:12:59.024298 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.13s 2025-07-05 23:12:59.024308 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.95s 2025-07-05 23:12:59.024318 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.89s 2025-07-05 23:12:59.024331 | orchestrator | glance : Check glance containers ---------------------------------------- 3.47s 2025-07-05 23:12:59.024342 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.38s 2025-07-05 23:12:59.024351 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.38s 2025-07-05 23:12:59.024361 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.37s 2025-07-05 23:12:59.024371 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.26s 2025-07-05 23:12:59.024381 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.20s 2025-07-05 23:12:59.024391 | orchestrator | 2025-07-05 23:12:59 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:12:59.024401 | orchestrator | 2025-07-05 23:12:59 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:12:59.024503 | orchestrator | 2025-07-05 23:12:59 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:12:59.024516 | 
orchestrator | 2025-07-05 23:12:59 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:13:02.082732 | orchestrator | 2025-07-05 23:13:02 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:13:02.086607 | orchestrator | 2025-07-05 23:13:02 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:13:02.087867 | orchestrator | 2025-07-05 23:13:02 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED 2025-07-05 23:13:02.088942 | orchestrator | 2025-07-05 23:13:02 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:13:02.089283 | orchestrator | 2025-07-05 23:13:02 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:13:05.125902 | orchestrator | 2025-07-05 23:13:05 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:13:05.127228 | orchestrator | 2025-07-05 23:13:05 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:13:05.129945 | orchestrator | 2025-07-05 23:13:05 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED 2025-07-05 23:13:05.130862 | orchestrator | 2025-07-05 23:13:05 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state STARTED 2025-07-05 23:13:05.132385 | orchestrator | 2025-07-05 23:13:05 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:13:08.186611 | orchestrator | 2025-07-05 23:13:08 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED 2025-07-05 23:13:08.186726 | orchestrator | 2025-07-05 23:13:08 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:13:08.187558 | orchestrator | 2025-07-05 23:13:08 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED 2025-07-05 23:13:08.188397 | orchestrator | 2025-07-05 23:13:08 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED 2025-07-05 23:13:08.191421 | orchestrator | 2025-07-05 
23:13:08 | INFO  | Task 66c97fa2-0427-4251-830f-914f83e1286f is in state SUCCESS 2025-07-05 23:13:08.193608 | orchestrator | 2025-07-05 23:13:08.193664 | orchestrator | 2025-07-05 23:13:08.193686 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:13:08.193706 | orchestrator | 2025-07-05 23:13:08.193723 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 23:13:08.193742 | orchestrator | Saturday 05 July 2025 23:09:54 +0000 (0:00:00.283) 0:00:00.283 ********* 2025-07-05 23:13:08.194007 | orchestrator | ok: [testbed-manager] 2025-07-05 23:13:08.194386 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:13:08.194410 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:13:08.194428 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:13:08.194447 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:13:08.194465 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:13:08.194483 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:13:08.194502 | orchestrator | 2025-07-05 23:13:08.194522 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:13:08.194543 | orchestrator | Saturday 05 July 2025 23:09:55 +0000 (0:00:00.914) 0:00:01.197 ********* 2025-07-05 23:13:08.194565 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-07-05 23:13:08.194637 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-07-05 23:13:08.194715 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-07-05 23:13:08.194735 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-07-05 23:13:08.194806 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-07-05 23:13:08.194921 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-07-05 23:13:08.194945 | orchestrator | ok: [testbed-node-5] => 
(item=enable_prometheus_True) 2025-07-05 23:13:08.194963 | orchestrator | 2025-07-05 23:13:08.194981 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-07-05 23:13:08.195001 | orchestrator | 2025-07-05 23:13:08.195020 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-07-05 23:13:08.195169 | orchestrator | Saturday 05 July 2025 23:09:56 +0000 (0:00:00.754) 0:00:01.952 ********* 2025-07-05 23:13:08.195211 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:13:08.195235 | orchestrator | 2025-07-05 23:13:08.195254 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-07-05 23:13:08.195275 | orchestrator | Saturday 05 July 2025 23:09:58 +0000 (0:00:02.007) 0:00:03.959 ********* 2025-07-05 23:13:08.195300 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-05 23:13:08.195326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.195370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.195391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.195448 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 
23:13:08.195470 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.195492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.195524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.195546 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.195567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.195602 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.195622 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.195657 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.195679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.195702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.195744 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.195850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.195883 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-05 23:13:08.195904 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.195935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.195953 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.195971 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.195995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.196101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.196120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.196138 | orchestrator | 2025-07-05 23:13:08.196155 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-07-05 23:13:08.196172 | orchestrator | Saturday 05 July 2025 
23:10:02 +0000 (0:00:03.715) 0:00:07.674 ********* 2025-07-05 23:13:08.196189 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:13:08.196207 | orchestrator | 2025-07-05 23:13:08.196223 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-07-05 23:13:08.196245 | orchestrator | Saturday 05 July 2025 23:10:03 +0000 (0:00:01.282) 0:00:08.957 ********* 2025-07-05 23:13:08.196264 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-05 23:13:08.196292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.196310 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.196327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.196350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.196360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.196371 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.196386 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.196404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.196415 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.196436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.196464 | orchestrator | changed: [testbed-node-5] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196474 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196489 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-05 23:13:08.196507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.196518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.196528 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.196555 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.196567 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196590 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-05 23:13:08.196669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.196690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.196707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-05 23:13:08.196724 | orchestrator | 2025-07-05 23:13:08.196735 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-07-05 23:13:08.196745 | orchestrator | Saturday 05 July 2025 23:10:09 +0000 (0:00:06.145) 0:00:15.102 ********* 2025-07-05 23:13:08.196792 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-05 23:13:08.196821 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-05 23:13:08.196832 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-05 23:13:08.196843 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-05 23:13:08.196864 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:13:08.196881 | orchestrator | skipping: [testbed-manager] 2025-07-05 23:13:08.196893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-05 23:13:08.196903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:13:08.196928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:13:08.196939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-05 23:13:08.196950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:13:08.196960 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-05 23:13:08.196970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:13:08.196993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:13:08.197006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-05 23:13:08.197022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:13:08.197037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-05 23:13:08.197048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:13:08.197058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:13:08.197068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-05 23:13:08.197078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-05 23:13:08.197088 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:08.197099 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:08.197108 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:08.197125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-05 23:13:08.197146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-05 23:13:08.197161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-05 23:13:08.197171 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:08.197186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-05 23:13:08.197197 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197218 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:13:08.197228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.197238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197273 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:13:08.197284 | orchestrator |
2025-07-05 23:13:08.197301 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-07-05 23:13:08.197317 | orchestrator | Saturday 05 July 2025 23:10:10 +0000 (0:00:01.316) 0:00:16.418 *********
2025-07-05 23:13:08.197333 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-05 23:13:08.197356 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.197376 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197389 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-05 23:13:08.197400 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.197424 | orchestrator | skipping: [testbed-manager]
2025-07-05 23:13:08.197450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.197469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.197487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.197510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.197532 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:13:08.197542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.197552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.197572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.197598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.197632 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:13:08.197656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.197674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.197691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.197702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.197729 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:13:08.197745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.197782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197803 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:13:08.197817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.197828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197849 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:13:08.197859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.197875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.197903 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:13:08.197913 | orchestrator |
2025-07-05 23:13:08.197923 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-07-05 23:13:08.197934 | orchestrator | Saturday 05 July 2025 23:10:12 +0000 (0:00:01.581) 0:00:18.000 *********
2025-07-05 23:13:08.197944 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-05 23:13:08.197959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.197970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.197981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.198006 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.198048 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.198226 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.198243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.198254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.198271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.198282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.198293 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.198312 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.198322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.198340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.198351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.198366 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-05 23:13:08.198378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.198389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.198406 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.198417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.198432 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.198442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.198453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.198470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.198488 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.198516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.198533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.198550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.198567 | orchestrator |
2025-07-05 23:13:08.198583 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-07-05 23:13:08.198600 | orchestrator | Saturday 05 July 2025 23:10:18 +0000 (0:00:05.899) 0:00:23.900 *********
2025-07-05 23:13:08.198616 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-05 23:13:08.198631 | orchestrator |
2025-07-05 23:13:08.198647 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-07-05 23:13:08.198671 | orchestrator | Saturday 05 July 2025 23:10:19 +0000 (0:00:00.896) 0:00:24.797 *********
2025-07-05 23:13:08.198689 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082642, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1064546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 23:13:08.198706 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082642, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1064546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 23:13:08.198793 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082642, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1064546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 23:13:08.198825 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082642, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1064546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 23:13:08.198843 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082642, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1064546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 23:13:08.198861 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082642, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1064546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 23:13:08.198887 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False,
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1082628, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1044545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.198905 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1082628, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1044545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.198923 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1082628, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1044545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.198946 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082642, 'dev': 
102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1064546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.198972 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1082628, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1044545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.198990 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082600, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199007 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1082628, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1044545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199031 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082600, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199048 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1082628, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1044545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199064 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082600, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-07-05 23:13:08.199086 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1082602, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199111 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082600, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199127 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1082628, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1044545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 23:13:08.199144 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082600, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199160 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1082602, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199187 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1082625, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1034546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199204 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 
1082602, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199237 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1082602, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199255 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082600, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199272 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1082602, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199289 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1082625, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1034546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199305 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1082625, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1034546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199331 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082600, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
2025-07-05 23:13:08.199343 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1082625, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1034546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199369 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1082602, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199380 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1082625, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1034546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199391 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1082609, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1024547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199401 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1082609, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1024547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199412 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1082622, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1034546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199428 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1082609, 'dev': 102, 
'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1024547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199438 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1082609, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1024547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199462 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1082625, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1034546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199480 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1082609, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1024547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199497 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1082631, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1044545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199514 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1082622, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1034546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199530 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1082609, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1024547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199555 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1082640, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1054547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199574 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1082622, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1034546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199608 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1082602, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 23:13:08.199627 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1082622, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1034546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199644 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1082622, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1034546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199661 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1082631, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1044545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-05 23:13:08.199679 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1082631, 'dev': 102, 'nlink': 1, 'atime': 
1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1044545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 23:13:08.199707 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1082663, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1094546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-05 23:13:08.199735 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'size': 7933, ...})
2025-07-05 23:13:08.199830 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'size': 13522, ...})
2025-07-05 23:13:08.199854 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'size': 13522, ...})
2025-07-05 23:13:08.199873 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'size': 12293, ...})
2025-07-05 23:13:08.199891 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'size': 7408, ...})
2025-07-05 23:13:08.199910 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'size': 7408, ...})
2025-07-05 23:13:08.199939 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'size': 3, ...})
2025-07-05 23:13:08.199981 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'size': 7408, ...})
2025-07-05 23:13:08.200008 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'size': 13522, ...})
2025-07-05 23:13:08.200026 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'size': 5593, ...})
2025-07-05 23:13:08.200044 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'size': 7408, ...})
2025-07-05 23:13:08.200063 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'size': 334, ...})
2025-07-05 23:13:08.200074 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'size': 334, ...})
2025-07-05 23:13:08.200091 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'size': 334, ...})
2025-07-05 23:13:08.200116 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'size': 996, ...})
2025-07-05 23:13:08.200129 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'size': 7408, ...})
2025-07-05 23:13:08.200138 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'size': 12293, ...})
2025-07-05 23:13:08.200146 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'size': 12293, ...})
2025-07-05 23:13:08.200155 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'size': 334, ...})
2025-07-05 23:13:08.200163 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'size': 12293, ...})
2025-07-05 23:13:08.200183 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'size': 55956, ...})
2025-07-05 23:13:08.200192 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'size': 12293, ...})
2025-07-05 23:13:08.200203 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'size': 3, ...})
2025-07-05 23:13:08.200212 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'size': 3, ...})
2025-07-05 23:13:08.200221 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'size': 334, ...})
2025-07-05 23:13:08.200229 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'size': 3, ...})
2025-07-05 23:13:08.200238 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'size': 3, ...})
2025-07-05 23:13:08.200264 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'size': 3, ...})
2025-07-05 23:13:08.200278 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'size': 3792, ...})
2025-07-05 23:13:08.200301 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'size': 996, ...})
2025-07-05 23:13:08.200315 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'size': 996, ...})
2025-07-05 23:13:08.200330 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'size': 12293, ...})
2025-07-05 23:13:08.200343 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'size': 996, ...})
2025-07-05 23:13:08.200352 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'size': 996, ...})
2025-07-05 23:13:08.200371 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'size': 3, ...})
2025-07-05 23:13:08.200383 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'size': 7933, ...})
2025-07-05 23:13:08.200401 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'size': 3, ...})
2025-07-05 23:13:08.200415 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'size': 3, ...})
2025-07-05 23:13:08.200428 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'size': 3539, ...})
2025-07-05 23:13:08.200441 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'size': 3, ...})
2025-07-05 23:13:08.200455 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'size': 3792, ...})
2025-07-05 23:13:08.200569 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'size': 3, ...})
2025-07-05 23:13:08.200588 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'size': 5987, ...})
2025-07-05 23:13:08.200602 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'size': 3792, ...})
2025-07-05 23:13:08.200623 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'size': 996, ...})
2025-07-05 23:13:08.200637 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'size': 3792, ...})
2025-07-05 23:13:08.200651 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'size': 3539, ...})
2025-07-05 23:13:08.200673 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'size': 3, ...})
2025-07-05 23:13:08.200695 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'size': 12980, ...})
2025-07-05 23:13:08.200709 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'size': 3539, ...})
2025-07-05 23:13:08.200724 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:13:08.200738 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'size': 3539, ...})
2025-07-05 23:13:08.200783 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'size': 3792, ...})
2025-07-05 23:13:08.200795 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'size': 3792, ...})
2025-07-05 23:13:08.200804 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'size': 13522, ...})
2025-07-05 23:13:08.200819 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'size': 5987, ...})
2025-07-05 23:13:08.200832 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'size': 3539, ...})
2025-07-05 23:13:08.200841 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'size': 5987, ...})
2025-07-05 23:13:08.200849 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'size': 5987, ...})
2025-07-05 23:13:08.200861 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'size': 3539, ...})
2025-07-05 23:13:08.200870 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'size': 12980, ...})
2025-07-05 23:13:08.200878 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:13:08.200891 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'size': 5987, ...})
2025-07-05 23:13:08.200900 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'size': 5987, ...})
2025-07-05 23:13:08.200912 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'size': 12980, ...})
2025-07-05 23:13:08.200921 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:13:08.200929 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'size': 12980, ...})
2025-07-05 23:13:08.200938 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:13:08.200950 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'size': 12980, ...})
2025-07-05 23:13:08.200958 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:13:08.200967 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'size': 12980, ...})
2025-07-05 23:13:08.200975 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:13:08.200983
| orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1082640, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1054547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 23:13:08.200997 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1082663, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1094546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 23:13:08.201005 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1082633, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1054547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 23:13:08.201019 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082605, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1014545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 23:13:08.201028 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1082619, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1024547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 23:13:08.201040 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082599, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1004546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 23:13:08.201049 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1082626, 'dev': 102, 'nlink': 
1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1034546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 23:13:08.201062 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1082660, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1094546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 23:13:08.201070 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1082615, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1024547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 23:13:08.201079 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1082646, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.1074548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-05 23:13:08.201088 | orchestrator | 2025-07-05 23:13:08.201096 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-07-05 23:13:08.201105 | orchestrator | Saturday 05 July 2025 23:10:40 +0000 (0:00:21.208) 0:00:46.005 ********* 2025-07-05 23:13:08.201117 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-05 23:13:08.201126 | orchestrator | 2025-07-05 23:13:08.201134 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-07-05 23:13:08.201142 | orchestrator | Saturday 05 July 2025 23:10:41 +0000 (0:00:00.816) 0:00:46.822 ********* 2025-07-05 23:13:08.201151 | orchestrator | [WARNING]: Skipped 2025-07-05 23:13:08.201159 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201167 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-07-05 23:13:08.201175 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201183 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-07-05 23:13:08.201191 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-05 23:13:08.201200 | orchestrator | [WARNING]: Skipped 2025-07-05 23:13:08.201208 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201216 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-07-05 23:13:08.201224 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201232 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-07-05 23:13:08.201244 | orchestrator | [WARNING]: Skipped 2025-07-05 23:13:08.201257 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201270 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-07-05 23:13:08.201282 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201295 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-07-05 23:13:08.201309 | orchestrator | [WARNING]: Skipped 2025-07-05 23:13:08.201322 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201344 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-07-05 23:13:08.201357 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201382 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-07-05 23:13:08.201396 | orchestrator | [WARNING]: Skipped 2025-07-05 23:13:08.201409 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201421 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-07-05 23:13:08.201435 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201448 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-07-05 23:13:08.201463 | orchestrator | [WARNING]: Skipped 2025-07-05 23:13:08.201476 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201490 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-07-05 23:13:08.201500 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201508 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-07-05 23:13:08.201516 | orchestrator | [WARNING]: Skipped 2025-07-05 23:13:08.201524 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201532 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-07-05 23:13:08.201540 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-05 23:13:08.201548 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-07-05 23:13:08.201556 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-05 23:13:08.201568 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-05 23:13:08.201582 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-05 23:13:08.201595 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-05 23:13:08.201608 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-05 23:13:08.201620 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-05 23:13:08.201633 | orchestrator | 2025-07-05 23:13:08.201645 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-07-05 23:13:08.201658 | orchestrator | Saturday 05 July 2025 23:10:44 +0000 (0:00:02.730) 0:00:49.552 ********* 2025-07-05 23:13:08.201671 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-05 23:13:08.201684 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-05 23:13:08.201698 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:08.201711 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:08.201724 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-05 23:13:08.201732 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:08.201740 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-05 23:13:08.201748 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:08.201776 | 
orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-05 23:13:08.201784 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:13:08.201792 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-05 23:13:08.201800 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:13:08.201808 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-07-05 23:13:08.201817 | orchestrator | 2025-07-05 23:13:08.201825 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-07-05 23:13:08.201833 | orchestrator | Saturday 05 July 2025 23:11:01 +0000 (0:00:17.299) 0:01:06.852 ********* 2025-07-05 23:13:08.201847 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-05 23:13:08.201862 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:08.201871 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-05 23:13:08.201879 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:08.201887 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-05 23:13:08.201895 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:08.201903 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-05 23:13:08.201911 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:08.201919 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-05 23:13:08.201927 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:13:08.201935 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 
 2025-07-05 23:13:08.201943 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:13:08.201951 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-07-05 23:13:08.201959 | orchestrator | 2025-07-05 23:13:08.201967 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-07-05 23:13:08.201976 | orchestrator | Saturday 05 July 2025 23:11:05 +0000 (0:00:04.233) 0:01:11.085 ********* 2025-07-05 23:13:08.201984 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-05 23:13:08.201993 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:08.202001 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-05 23:13:08.202009 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:08.202049 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-07-05 23:13:08.202059 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-05 23:13:08.202067 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:08.202075 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-05 23:13:08.202083 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:08.202091 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-05 23:13:08.202099 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:13:08.202107 | orchestrator | skipping: [testbed-node-4] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-05 23:13:08.202116 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:13:08.202124 | orchestrator | 2025-07-05 23:13:08.202132 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-07-05 23:13:08.202140 | orchestrator | Saturday 05 July 2025 23:11:08 +0000 (0:00:02.831) 0:01:13.916 ********* 2025-07-05 23:13:08.202148 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-05 23:13:08.202156 | orchestrator | 2025-07-05 23:13:08.202164 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-07-05 23:13:08.202172 | orchestrator | Saturday 05 July 2025 23:11:09 +0000 (0:00:00.956) 0:01:14.873 ********* 2025-07-05 23:13:08.202179 | orchestrator | skipping: [testbed-manager] 2025-07-05 23:13:08.202187 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:08.202195 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:08.202203 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:08.202211 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:08.202219 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:13:08.202227 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:13:08.202241 | orchestrator | 2025-07-05 23:13:08.202249 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-07-05 23:13:08.202257 | orchestrator | Saturday 05 July 2025 23:11:10 +0000 (0:00:01.332) 0:01:16.205 ********* 2025-07-05 23:13:08.202265 | orchestrator | skipping: [testbed-manager] 2025-07-05 23:13:08.202273 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:08.202281 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:13:08.202289 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:13:08.202297 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:13:08.202306 | 
orchestrator | changed: [testbed-node-2] 2025-07-05 23:13:08.202313 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:13:08.202321 | orchestrator | 2025-07-05 23:13:08.202329 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-07-05 23:13:08.202338 | orchestrator | Saturday 05 July 2025 23:11:13 +0000 (0:00:03.047) 0:01:19.253 ********* 2025-07-05 23:13:08.202346 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-05 23:13:08.202354 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-05 23:13:08.202362 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-05 23:13:08.202370 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-05 23:13:08.202378 | orchestrator | skipping: [testbed-manager] 2025-07-05 23:13:08.202386 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-05 23:13:08.202394 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:08.202407 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:08.202415 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:08.202423 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:08.202432 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-05 23:13:08.202440 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:13:08.202448 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-05 23:13:08.202456 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:13:08.202466 | orchestrator | 2025-07-05 23:13:08.202479 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-07-05 
23:13:08.202492 | orchestrator | Saturday 05 July 2025 23:11:16 +0000 (0:00:02.971) 0:01:22.224 ********* 2025-07-05 23:13:08.202505 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-05 23:13:08.202518 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-07-05 23:13:08.202531 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:08.202544 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-05 23:13:08.202556 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-05 23:13:08.202569 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:08.202582 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:08.202595 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-05 23:13:08.202608 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:08.202622 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-05 23:13:08.202635 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:13:08.202655 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-05 23:13:08.202670 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:13:08.202683 | orchestrator | 2025-07-05 23:13:08.202707 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-07-05 23:13:08.202720 | orchestrator | Saturday 05 July 2025 23:11:19 +0000 (0:00:02.771) 0:01:24.995 ********* 2025-07-05 23:13:08.202731 | orchestrator | [WARNING]: Skipped 2025-07-05 23:13:08.202739 | orchestrator 
| '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-07-05 23:13:08.202747 | orchestrator | due to this access issue: 2025-07-05 23:13:08.202804 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-07-05 23:13:08.202814 | orchestrator | not a directory 2025-07-05 23:13:08.202822 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-05 23:13:08.202830 | orchestrator | 2025-07-05 23:13:08.202838 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-07-05 23:13:08.202846 | orchestrator | Saturday 05 July 2025 23:11:21 +0000 (0:00:02.053) 0:01:27.049 ********* 2025-07-05 23:13:08.202854 | orchestrator | skipping: [testbed-manager] 2025-07-05 23:13:08.202862 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:08.202870 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:08.202878 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:08.202886 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:08.202894 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:13:08.202902 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:13:08.202910 | orchestrator | 2025-07-05 23:13:08.202918 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-07-05 23:13:08.202926 | orchestrator | Saturday 05 July 2025 23:11:22 +0000 (0:00:00.934) 0:01:27.984 ********* 2025-07-05 23:13:08.202934 | orchestrator | skipping: [testbed-manager] 2025-07-05 23:13:08.202942 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:08.202950 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:08.202958 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:08.202967 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:08.202979 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:13:08.202993 | orchestrator | skipping: [testbed-node-5] 2025-07-05 
23:13:08.203007 | orchestrator | 2025-07-05 23:13:08.203018 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-07-05 23:13:08.203026 | orchestrator | Saturday 05 July 2025 23:11:23 +0000 (0:00:01.419) 0:01:29.403 ********* 2025-07-05 23:13:08.203035 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-05 23:13:08.203053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-05 23:13:08.203063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.203078 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.203091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.203100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.203108 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.203117 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-05 23:13:08.203126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.203139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.203148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.203162 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.203175 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.203184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.203193 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.203202 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-05 23:13:08.203217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.203231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.203240 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.203252 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.203261 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.203270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.203278 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.203287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.203298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.203315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-05 23:13:08.203322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.203333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.203340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-05 23:13:08.203347 | orchestrator |
2025-07-05 23:13:08.203354 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-07-05 23:13:08.203361 | orchestrator | Saturday 05 July 2025 23:11:28 +0000 (0:00:04.635) 0:01:34.039 *********
2025-07-05 23:13:08.203368 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-07-05 23:13:08.203375 | orchestrator | skipping: [testbed-manager]
2025-07-05 23:13:08.203382 | orchestrator |
2025-07-05 23:13:08.203389 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-05 23:13:08.203396 | orchestrator | Saturday 05 July 2025 23:11:29 +0000 (0:00:00.972) 0:01:35.012 *********
2025-07-05 23:13:08.203402 | orchestrator |
2025-07-05 23:13:08.203409 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-05 23:13:08.203416 | orchestrator | Saturday 05 July 2025 23:11:29 +0000 (0:00:00.182) 0:01:35.194 *********
2025-07-05 23:13:08.203423 | orchestrator |
2025-07-05 23:13:08.203430 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-05 23:13:08.203437 | orchestrator | Saturday 05 July 2025 23:11:29 +0000 (0:00:00.065) 0:01:35.259 *********
2025-07-05 23:13:08.203443 | orchestrator |
2025-07-05 23:13:08.203450 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-05 23:13:08.203457 | orchestrator | Saturday 05 July 2025 23:11:29 +0000 (0:00:00.062) 0:01:35.322 *********
2025-07-05 23:13:08.203464 | orchestrator |
2025-07-05 23:13:08.203471 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-05 23:13:08.203478 | orchestrator | Saturday 05 July 2025 23:11:29 +0000 (0:00:00.064) 0:01:35.387 *********
2025-07-05 23:13:08.203488 | orchestrator |
2025-07-05 23:13:08.203495 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-05 23:13:08.203502 | orchestrator | Saturday 05 July 2025 23:11:29 +0000 (0:00:00.056) 0:01:35.443 *********
2025-07-05 23:13:08.203509 | orchestrator |
2025-07-05 23:13:08.203516 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-05 23:13:08.203522 | orchestrator | Saturday 05 July 2025 23:11:29 +0000 (0:00:00.067) 0:01:35.510 *********
2025-07-05 23:13:08.203529 | orchestrator |
2025-07-05 23:13:08.203536 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-07-05 23:13:08.203543 | orchestrator | Saturday 05 July 2025 23:11:30 +0000 (0:00:00.069) 0:01:35.579 *********
2025-07-05 23:13:08.203550 | orchestrator | changed: [testbed-manager]
2025-07-05 23:13:08.203557 | orchestrator |
2025-07-05 23:13:08.203564 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-07-05 23:13:08.203578 | orchestrator | Saturday 05 July 2025 23:11:47 +0000 (0:00:17.626) 0:01:53.206 *********
2025-07-05 23:13:08.203590 | orchestrator | changed: [testbed-manager]
2025-07-05 23:13:08.203601 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:13:08.203611 | orchestrator | changed: [testbed-node-5]
2025-07-05 23:13:08.203623 | orchestrator | changed: [testbed-node-3]
2025-07-05 23:13:08.203635 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:13:08.203647 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:13:08.203654 | orchestrator | changed: [testbed-node-4]
2025-07-05 23:13:08.203661 | orchestrator |
2025-07-05 23:13:08.203668 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-07-05 23:13:08.203674 | orchestrator | Saturday 05 July 2025 23:12:00 +0000 (0:00:13.132) 0:02:06.339 *********
2025-07-05 23:13:08.203681 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:13:08.203688 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:13:08.203694 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:13:08.203701 | orchestrator |
2025-07-05 23:13:08.203708 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-07-05 23:13:08.203715 | orchestrator | Saturday 05 July 2025 23:12:06 +0000 (0:00:05.444) 0:02:11.783 *********
2025-07-05 23:13:08.203721 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:13:08.203728 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:13:08.203735 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:13:08.203741 | orchestrator |
2025-07-05 23:13:08.203748 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-07-05 23:13:08.203772 | orchestrator | Saturday 05 July 2025 23:12:16 +0000 (0:00:10.261) 0:02:22.044 *********
2025-07-05 23:13:08.203779 | orchestrator | changed: [testbed-manager]
2025-07-05 23:13:08.203786 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:13:08.203793 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:13:08.203800 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:13:08.203807 | orchestrator | changed: [testbed-node-4]
2025-07-05 23:13:08.203814 | orchestrator | changed: [testbed-node-3]
2025-07-05 23:13:08.203820 | orchestrator | changed: [testbed-node-5]
2025-07-05 23:13:08.203827 | orchestrator |
2025-07-05 23:13:08.203834 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-07-05 23:13:08.203847 | orchestrator | Saturday 05 July 2025 23:12:27 +0000 (0:00:10.583) 0:02:32.628 *********
2025-07-05 23:13:08.203854 | orchestrator | changed: [testbed-manager]
2025-07-05 23:13:08.203861 | orchestrator |
2025-07-05 23:13:08.203868 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-07-05 23:13:08.203879 | orchestrator | Saturday 05 July 2025 23:12:39 +0000 (0:00:12.528) 0:02:45.156 *********
2025-07-05 23:13:08.203889 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:13:08.203896 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:13:08.203902 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:13:08.203909 | orchestrator |
2025-07-05 23:13:08.203916 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-07-05 23:13:08.203928 | orchestrator | Saturday 05 July 2025 23:12:47 +0000 (0:00:08.005) 0:02:53.161 *********
2025-07-05 23:13:08.203935 | orchestrator | changed: [testbed-manager]
2025-07-05 23:13:08.203942 | orchestrator |
2025-07-05 23:13:08.203949 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-07-05 23:13:08.203956 | orchestrator | Saturday 05 July 2025 23:12:53 +0000 (0:00:05.796) 0:02:58.958 *********
2025-07-05 23:13:08.203962 | orchestrator | changed: [testbed-node-3]
2025-07-05 23:13:08.203969 | orchestrator | changed: [testbed-node-5]
2025-07-05 23:13:08.203976 | orchestrator | changed: [testbed-node-4]
2025-07-05 23:13:08.203983 | orchestrator |
2025-07-05 23:13:08.203990 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 23:13:08.203997 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-05 23:13:08.204004 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-05 23:13:08.204011 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-05 23:13:08.204018 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-05 23:13:08.204025 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-05 23:13:08.204032 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-05 23:13:08.204039 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-05 23:13:08.204046 | orchestrator |
2025-07-05 23:13:08.204053 | orchestrator |
2025-07-05 23:13:08.204060 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 23:13:08.204066 | orchestrator | Saturday 05 July 2025 23:13:04 +0000 (0:00:10.631) 0:03:09.589 *********
2025-07-05 23:13:08.204073 | orchestrator | ===============================================================================
2025-07-05 23:13:08.204080 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 21.21s
2025-07-05 23:13:08.204087 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.63s
2025-07-05 23:13:08.204094 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.30s
2025-07-05 23:13:08.204100 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.13s
2025-07-05 23:13:08.204111 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.53s
2025-07-05 23:13:08.204118 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.63s
2025-07-05 23:13:08.204125 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 10.58s
2025-07-05 23:13:08.204132 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.26s
2025-07-05 23:13:08.204139 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 8.01s
2025-07-05 23:13:08.204146 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.15s
2025-07-05 23:13:08.204153 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.90s
2025-07-05 23:13:08.204159 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.80s
2025-07-05 23:13:08.204166 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.44s
2025-07-05 23:13:08.204173 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.64s
2025-07-05 23:13:08.204180 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.23s
2025-07-05 23:13:08.204191 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.72s
2025-07-05 23:13:08.204198 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.05s
2025-07-05 23:13:08.204204 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.97s
2025-07-05 23:13:08.204211 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.83s
2025-07-05 23:13:08.204218 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.77s
2025-07-05 23:13:08.204225 | orchestrator | 2025-07-05 23:13:08 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:11.251232 | orchestrator | 2025-07-05 23:13:11 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:11.254599 | orchestrator | 2025-07-05 23:13:11 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:11.257907 | orchestrator | 2025-07-05 23:13:11 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:11.259390 | orchestrator | 2025-07-05 23:13:11 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:11.259518 | orchestrator | 2025-07-05 23:13:11 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:14.308703 | orchestrator | 2025-07-05 23:13:14 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:14.310147 | orchestrator | 2025-07-05 23:13:14 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:14.312187 | orchestrator | 2025-07-05 23:13:14 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:14.313338 | orchestrator | 2025-07-05 23:13:14 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:14.314327 | orchestrator | 2025-07-05 23:13:14 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:17.363255 | orchestrator | 2025-07-05 23:13:17 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:17.365422 | orchestrator | 2025-07-05 23:13:17 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:17.367406 | orchestrator | 2025-07-05 23:13:17 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:17.369532 | orchestrator | 2025-07-05 23:13:17 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:17.369935 | orchestrator | 2025-07-05 23:13:17 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:20.405710 | orchestrator | 2025-07-05 23:13:20 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:20.407122 | orchestrator | 2025-07-05 23:13:20 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:20.407163 | orchestrator | 2025-07-05 23:13:20 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:20.408095 | orchestrator | 2025-07-05 23:13:20 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:20.408141 | orchestrator | 2025-07-05 23:13:20 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:23.456665 | orchestrator | 2025-07-05 23:13:23 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:23.458467 | orchestrator | 2025-07-05 23:13:23 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:23.460185 | orchestrator | 2025-07-05 23:13:23 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:23.461927 | orchestrator | 2025-07-05 23:13:23 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:23.461976 | orchestrator | 2025-07-05 23:13:23 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:26.505525 | orchestrator | 2025-07-05 23:13:26 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:26.507167 | orchestrator | 2025-07-05 23:13:26 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:26.509330 | orchestrator | 2025-07-05 23:13:26 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:26.510510 | orchestrator | 2025-07-05 23:13:26 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:26.510721 | orchestrator | 2025-07-05 23:13:26 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:29.554221 | orchestrator | 2025-07-05 23:13:29 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:29.556234 | orchestrator | 2025-07-05 23:13:29 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:29.559426 | orchestrator | 2025-07-05 23:13:29 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:29.561192 | orchestrator | 2025-07-05 23:13:29 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:29.561711 | orchestrator | 2025-07-05 23:13:29 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:32.599399 | orchestrator | 2025-07-05 23:13:32 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:32.601473 | orchestrator | 2025-07-05 23:13:32 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:32.603049 | orchestrator | 2025-07-05 23:13:32 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:32.604769 | orchestrator | 2025-07-05 23:13:32 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:32.604804 | orchestrator | 2025-07-05 23:13:32 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:35.644874 | orchestrator | 2025-07-05 23:13:35 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:35.645659 | orchestrator | 2025-07-05 23:13:35 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:35.647387 | orchestrator | 2025-07-05 23:13:35 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:35.648467 | orchestrator | 2025-07-05 23:13:35 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:35.648492 | orchestrator | 2025-07-05 23:13:35 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:38.689260 | orchestrator | 2025-07-05 23:13:38 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:38.689551 | orchestrator | 2025-07-05 23:13:38 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:38.690438 | orchestrator | 2025-07-05 23:13:38 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:38.692270 | orchestrator | 2025-07-05 23:13:38 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:38.692376 | orchestrator | 2025-07-05 23:13:38 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:41.728809 | orchestrator | 2025-07-05 23:13:41 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:41.732433 | orchestrator | 2025-07-05 23:13:41 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:41.732493 | orchestrator | 2025-07-05 23:13:41 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:41.732506 | orchestrator | 2025-07-05 23:13:41 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:41.732517 | orchestrator | 2025-07-05 23:13:41 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:44.769062 | orchestrator | 2025-07-05 23:13:44 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:44.770820 | orchestrator | 2025-07-05 23:13:44 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:44.771696 | orchestrator | 2025-07-05 23:13:44 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:44.772526 | orchestrator | 2025-07-05 23:13:44 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:44.772731 | orchestrator | 2025-07-05 23:13:44 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:47.814899 | orchestrator | 2025-07-05 23:13:47 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:47.816518 | orchestrator | 2025-07-05 23:13:47 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:47.818153 | orchestrator | 2025-07-05 23:13:47 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:47.819689 | orchestrator | 2025-07-05 23:13:47 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:47.820302 | orchestrator | 2025-07-05 23:13:47 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:50.854873 | orchestrator | 2025-07-05 23:13:50 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:50.855255 | orchestrator | 2025-07-05 23:13:50 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:50.858696 | orchestrator | 2025-07-05 23:13:50 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:50.859617 | orchestrator | 2025-07-05 23:13:50 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:50.859648 | orchestrator | 2025-07-05 23:13:50 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:53.898959 | orchestrator | 2025-07-05 23:13:53 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:53.900983 | orchestrator | 2025-07-05 23:13:53 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:53.901188 | orchestrator | 2025-07-05 23:13:53 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:53.901721 | orchestrator | 2025-07-05 23:13:53 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:53.901772 | orchestrator | 2025-07-05 23:13:53 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:56.940652 | orchestrator | 2025-07-05 23:13:56 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:56.940830 | orchestrator | 2025-07-05 23:13:56 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:56.941271 | orchestrator | 2025-07-05 23:13:56 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state STARTED
2025-07-05 23:13:56.941894 | orchestrator | 2025-07-05 23:13:56 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:13:56.941923 | orchestrator | 2025-07-05 23:13:56 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:13:59.984197 | orchestrator | 2025-07-05 23:13:59 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED
2025-07-05 23:13:59.984441 | orchestrator | 2025-07-05 23:13:59 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:13:59.986191 | orchestrator | 2025-07-05 23:13:59 | INFO  | Task afe141e0-c79e-4d10-a56e-b8f8467a1dc7 is in state SUCCESS
2025-07-05 23:13:59.987492 | orchestrator |
2025-07-05 23:13:59.987537 | orchestrator |
2025-07-05 23:13:59.987558 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-05 23:13:59.987579 | orchestrator |
2025-07-05 23:13:59.987597 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-05 23:13:59.987617 | orchestrator | Saturday 05 July 2025 23:10:06 +0000 (0:00:00.211) 0:00:00.211 *********
2025-07-05 23:13:59.987634 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:13:59.987824 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:13:59.987855 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:13:59.987874 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:13:59.987894 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:13:59.987912 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:13:59.987930 | orchestrator | 2025-07-05 23:13:59.987950 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:13:59.987970 | orchestrator | Saturday 05 July 2025 23:10:07 +0000 (0:00:00.622) 0:00:00.834 ********* 2025-07-05 23:13:59.987986 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-07-05 23:13:59.987998 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-07-05 23:13:59.988814 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-07-05 23:13:59.988845 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-07-05 23:13:59.988856 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-07-05 23:13:59.988868 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-07-05 23:13:59.988879 | orchestrator | 2025-07-05 23:13:59.988891 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-07-05 23:13:59.988903 | orchestrator | 2025-07-05 23:13:59.988914 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-05 23:13:59.988926 | orchestrator | Saturday 05 July 2025 23:10:07 +0000 (0:00:00.583) 0:00:01.417 ********* 2025-07-05 23:13:59.988938 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:13:59.988951 | orchestrator | 2025-07-05 23:13:59.989021 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-07-05 23:13:59.989036 | orchestrator | Saturday 05 July 2025 23:10:08 +0000 (0:00:01.048) 0:00:02.466 ********* 2025-07-05 23:13:59.989048 | orchestrator | changed: [testbed-node-0] => 
(item=cinderv3 (volumev3)) 2025-07-05 23:13:59.989059 | orchestrator | 2025-07-05 23:13:59.989071 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-07-05 23:13:59.989082 | orchestrator | Saturday 05 July 2025 23:10:12 +0000 (0:00:03.573) 0:00:06.040 ********* 2025-07-05 23:13:59.989093 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-07-05 23:13:59.989105 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-07-05 23:13:59.989118 | orchestrator | 2025-07-05 23:13:59.989129 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-07-05 23:13:59.989140 | orchestrator | Saturday 05 July 2025 23:10:19 +0000 (0:00:06.712) 0:00:12.752 ********* 2025-07-05 23:13:59.989152 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-05 23:13:59.989163 | orchestrator | 2025-07-05 23:13:59.989175 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-07-05 23:13:59.989186 | orchestrator | Saturday 05 July 2025 23:10:22 +0000 (0:00:03.384) 0:00:16.137 ********* 2025-07-05 23:13:59.989197 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-05 23:13:59.989233 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-07-05 23:13:59.989245 | orchestrator | 2025-07-05 23:13:59.989256 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-07-05 23:13:59.989268 | orchestrator | Saturday 05 July 2025 23:10:26 +0000 (0:00:04.136) 0:00:20.274 ********* 2025-07-05 23:13:59.989279 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-05 23:13:59.989290 | orchestrator | 2025-07-05 23:13:59.989301 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] 
********************** 2025-07-05 23:13:59.989326 | orchestrator | Saturday 05 July 2025 23:10:30 +0000 (0:00:03.438) 0:00:23.712 ********* 2025-07-05 23:13:59.989338 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-07-05 23:13:59.989349 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-07-05 23:13:59.989360 | orchestrator | 2025-07-05 23:13:59.989371 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-07-05 23:13:59.989382 | orchestrator | Saturday 05 July 2025 23:10:38 +0000 (0:00:08.049) 0:00:31.762 ********* 2025-07-05 23:13:59.989397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-05 23:13:59.989462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-05 23:13:59.989477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-05 23:13:59.989490 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', 
'', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.989517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.989530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.989578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.989600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.989614 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.989647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.989671 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.989686 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.989700 | orchestrator | 2025-07-05 23:13:59.989773 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-05 23:13:59.989789 | orchestrator | Saturday 05 July 2025 23:10:40 +0000 (0:00:02.289) 0:00:34.053 ********* 2025-07-05 23:13:59.989802 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:59.989814 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:59.989825 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:59.989836 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:59.989847 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:13:59.989858 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:13:59.989869 | orchestrator | 2025-07-05 23:13:59.989880 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-05 23:13:59.989891 | orchestrator | Saturday 05 July 2025 23:10:40 +0000 (0:00:00.525) 0:00:34.578 ********* 2025-07-05 23:13:59.989902 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:59.989913 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:59.989924 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:59.989935 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:13:59.989946 | orchestrator | 2025-07-05 23:13:59.989957 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-07-05 23:13:59.989968 | orchestrator | Saturday 05 July 2025 23:10:42 +0000 (0:00:01.619) 0:00:36.197 ********* 2025-07-05 23:13:59.989979 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-07-05 23:13:59.989990 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-07-05 
23:13:59.990009 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-07-05 23:13:59.990072 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-07-05 23:13:59.990085 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-07-05 23:13:59.990097 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-07-05 23:13:59.990117 | orchestrator | 2025-07-05 23:13:59.990128 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-07-05 23:13:59.990139 | orchestrator | Saturday 05 July 2025 23:10:45 +0000 (0:00:02.696) 0:00:38.894 ********* 2025-07-05 23:13:59.990152 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-05 23:13:59.990166 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-05 23:13:59.990178 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-05 23:13:59.990231 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  
2025-07-05 23:13:59.990246 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-05 23:13:59.990265 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-05 23:13:59.990312 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-05 23:13:59.990326 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-05 23:13:59.990372 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-05 23:13:59.990393 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-05 23:13:59.990406 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-05 23:13:59.990423 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-05 23:13:59.990436 | orchestrator | 2025-07-05 23:13:59.990447 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-07-05 23:13:59.990459 | orchestrator | Saturday 05 July 2025 23:10:49 +0000 (0:00:04.154) 0:00:43.049 ********* 2025-07-05 23:13:59.990470 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-07-05 23:13:59.990482 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-07-05 23:13:59.990493 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-07-05 23:13:59.990504 | orchestrator | 2025-07-05 23:13:59.990516 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-07-05 23:13:59.990527 | orchestrator | Saturday 05 July 2025 23:10:51 +0000 (0:00:01.920) 0:00:44.969 ********* 2025-07-05 23:13:59.990538 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-07-05 23:13:59.990549 | orchestrator | 
changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-07-05 23:13:59.990560 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-07-05 23:13:59.990571 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-07-05 23:13:59.990583 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-07-05 23:13:59.990625 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-07-05 23:13:59.990638 | orchestrator | 2025-07-05 23:13:59.990656 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-07-05 23:13:59.990667 | orchestrator | Saturday 05 July 2025 23:10:54 +0000 (0:00:03.108) 0:00:48.079 ********* 2025-07-05 23:13:59.990678 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-07-05 23:13:59.990689 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-07-05 23:13:59.990700 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-07-05 23:13:59.990711 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-07-05 23:13:59.990722 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-07-05 23:13:59.990733 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-07-05 23:13:59.990814 | orchestrator | 2025-07-05 23:13:59.990825 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-07-05 23:13:59.990836 | orchestrator | Saturday 05 July 2025 23:10:55 +0000 (0:00:01.183) 0:00:49.262 ********* 2025-07-05 23:13:59.990847 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:59.990859 | orchestrator | 2025-07-05 23:13:59.990870 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-07-05 23:13:59.990881 | orchestrator | Saturday 05 July 2025 23:10:55 +0000 (0:00:00.111) 0:00:49.374 ********* 2025-07-05 
23:13:59.990892 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:13:59.990902 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:13:59.990912 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:13:59.990921 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:13:59.990931 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:13:59.990941 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:13:59.990951 | orchestrator |
2025-07-05 23:13:59.990960 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-05 23:13:59.990970 | orchestrator | Saturday 05 July 2025 23:10:56 +0000 (0:00:00.985) 0:00:50.359 *********
2025-07-05 23:13:59.990982 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:13:59.990993 | orchestrator |
2025-07-05 23:13:59.991002 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-07-05 23:13:59.991012 | orchestrator | Saturday 05 July 2025 23:10:58 +0000 (0:00:01.901) 0:00:52.261 *********
2025-07-05 23:13:59.991023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-05 23:13:59.991039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-05 23:13:59.991095 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-05 23:13:59.991120 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991228 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991250 | orchestrator |
2025-07-05 23:13:59.991261 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-07-05 23:13:59.991272 | orchestrator | Saturday 05 July 2025 23:11:02 +0000 (0:00:03.490) 0:00:55.751 *********
2025-07-05 23:13:59.991287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-05 23:13:59.991310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-05 23:13:59.991333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991344 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:13:59.991355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-05 23:13:59.991370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991387 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:13:59.991398 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:13:59.991409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991440 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:13:59.991451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991473 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:13:59.991488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991516 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:13:59.991527 | orchestrator |
2025-07-05 23:13:59.991537 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-07-05 23:13:59.991547 | orchestrator | Saturday 05 July 2025 23:11:04 +0000 (0:00:01.937) 0:00:57.688 *********
2025-07-05 23:13:59.991564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-05 23:13:59.991576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991587 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:13:59.991598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-05 23:13:59.991614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991629 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:13:59.991639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-05 23:13:59.991657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991669 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:13:59.991679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991711 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:13:59.991728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991819 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:13:59.991847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991868 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:13:59.991878 | orchestrator |
2025-07-05 23:13:59.991888 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-07-05 23:13:59.991898 | orchestrator | Saturday 05 July 2025 23:11:05 +0000 (0:00:01.577) 0:00:59.266 *********
2025-07-05 23:13:59.991909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-05 23:13:59.991930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-05 23:13:59.991941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-05 23:13:59.991958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991968 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.991999 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.992013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.992024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.992040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.992051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.992061 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-05 23:13:59.992077 | orchestrator |
2025-07-05 23:13:59.992088 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2025-07-05 23:13:59.992098 | orchestrator | Saturday 05 July 2025 23:11:09 +0000 (0:00:03.954) 0:01:03.220 *********
2025-07-05 23:13:59.992108 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-07-05 23:13:59.992118 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:13:59.992128 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-07-05 23:13:59.992137 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-07-05 23:13:59.992147 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:13:59.992157 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-07-05 23:13:59.992167 | orchestrator | skipping: [testbed-node-5]
=> (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-05 23:13:59.992177 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:13:59.992191 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-05 23:13:59.992201 | orchestrator | 2025-07-05 23:13:59.992211 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-07-05 23:13:59.992220 | orchestrator | Saturday 05 July 2025 23:11:12 +0000 (0:00:02.752) 0:01:05.973 ********* 2025-07-05 23:13:59.992231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-05 23:13:59.992247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-05 23:13:59.992258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-05 23:13:59.992274 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.992289 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.992305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.992315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.992326 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.992342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.992352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.992367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.992377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.992392 | orchestrator | 2025-07-05 23:13:59.992408 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-07-05 23:13:59.992452 | orchestrator | Saturday 05 July 2025 23:11:22 +0000 (0:00:09.985) 0:01:15.958 ********* 2025-07-05 23:13:59.992477 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:59.992492 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:59.992507 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:59.992524 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:13:59.992539 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:13:59.992556 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:13:59.992572 | orchestrator | 2025-07-05 23:13:59.992587 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-07-05 23:13:59.992614 | orchestrator | Saturday 05 July 2025 23:11:25 +0000 (0:00:02.943) 0:01:18.901 ********* 2025-07-05 23:13:59.992625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-05 23:13:59.992636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:13:59.992646 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:59.992657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-05 23:13:59.992673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:13:59.992690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-05 23:13:59.992707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:13:59.992717 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:59.992727 | 
orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:59.992797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-05 23:13:59.992810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-05 23:13:59.992820 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:59.992835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-05 23:13:59.992846 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-05 23:13:59.992863 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:13:59.992880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-05 23:13:59.992891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-05 23:13:59.992901 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:13:59.992911 | orchestrator | 2025-07-05 23:13:59.992921 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-07-05 23:13:59.992931 | orchestrator | Saturday 05 July 2025 23:11:26 +0000 (0:00:01.673) 0:01:20.575 ********* 2025-07-05 23:13:59.992941 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:59.992951 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:59.992960 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:59.992970 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:59.992980 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:13:59.992990 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:13:59.992999 | orchestrator | 2025-07-05 23:13:59.993009 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 
2025-07-05 23:13:59.993019 | orchestrator | Saturday 05 July 2025 23:11:27 +0000 (0:00:00.753) 0:01:21.328 ********* 2025-07-05 23:13:59.993034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-05 23:13:59.993045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-05 23:13:59.993067 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-05 23:13:59.993078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.993089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.993107 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.993118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.993140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.993151 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.993160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.993169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.993181 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-05 23:13:59.993190 | orchestrator | 2025-07-05 23:13:59.993198 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-05 23:13:59.993213 | orchestrator | Saturday 05 July 2025 23:11:30 +0000 (0:00:02.683) 0:01:24.012 ********* 2025-07-05 23:13:59.993221 | orchestrator | skipping: 
[testbed-node-0] 2025-07-05 23:13:59.993229 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:13:59.993237 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:13:59.993245 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:13:59.993253 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:13:59.993261 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:13:59.993269 | orchestrator | 2025-07-05 23:13:59.993277 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-07-05 23:13:59.993285 | orchestrator | Saturday 05 July 2025 23:11:31 +0000 (0:00:01.055) 0:01:25.067 ********* 2025-07-05 23:13:59.993293 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:13:59.993315 | orchestrator | 2025-07-05 23:13:59.993323 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-07-05 23:13:59.993332 | orchestrator | Saturday 05 July 2025 23:11:33 +0000 (0:00:02.139) 0:01:27.207 ********* 2025-07-05 23:13:59.993340 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:13:59.993348 | orchestrator | 2025-07-05 23:13:59.993356 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-07-05 23:13:59.993364 | orchestrator | Saturday 05 July 2025 23:11:35 +0000 (0:00:02.298) 0:01:29.505 ********* 2025-07-05 23:13:59.993372 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:13:59.993380 | orchestrator | 2025-07-05 23:13:59.993388 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-05 23:13:59.993396 | orchestrator | Saturday 05 July 2025 23:11:54 +0000 (0:00:18.432) 0:01:47.938 ********* 2025-07-05 23:13:59.993404 | orchestrator | 2025-07-05 23:13:59.993416 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-05 23:13:59.993424 | orchestrator | Saturday 05 July 2025 23:11:54 +0000 (0:00:00.085) 
0:01:48.023 ********* 2025-07-05 23:13:59.993432 | orchestrator | 2025-07-05 23:13:59.993440 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-05 23:13:59.993449 | orchestrator | Saturday 05 July 2025 23:11:54 +0000 (0:00:00.090) 0:01:48.113 ********* 2025-07-05 23:13:59.993457 | orchestrator | 2025-07-05 23:13:59.993465 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-05 23:13:59.993473 | orchestrator | Saturday 05 July 2025 23:11:54 +0000 (0:00:00.078) 0:01:48.191 ********* 2025-07-05 23:13:59.993481 | orchestrator | 2025-07-05 23:13:59.993489 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-05 23:13:59.993497 | orchestrator | Saturday 05 July 2025 23:11:54 +0000 (0:00:00.059) 0:01:48.251 ********* 2025-07-05 23:13:59.993505 | orchestrator | 2025-07-05 23:13:59.993513 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-05 23:13:59.993521 | orchestrator | Saturday 05 July 2025 23:11:54 +0000 (0:00:00.057) 0:01:48.308 ********* 2025-07-05 23:13:59.993529 | orchestrator | 2025-07-05 23:13:59.993537 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-07-05 23:13:59.993545 | orchestrator | Saturday 05 July 2025 23:11:54 +0000 (0:00:00.060) 0:01:48.368 ********* 2025-07-05 23:13:59.993553 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:13:59.993561 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:13:59.993569 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:13:59.993577 | orchestrator | 2025-07-05 23:13:59.993585 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-07-05 23:13:59.993593 | orchestrator | Saturday 05 July 2025 23:12:20 +0000 (0:00:25.707) 0:02:14.076 ********* 2025-07-05 23:13:59.993601 | orchestrator | 
changed: [testbed-node-0] 2025-07-05 23:13:59.993609 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:13:59.993617 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:13:59.993625 | orchestrator | 2025-07-05 23:13:59.993633 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-07-05 23:13:59.993641 | orchestrator | Saturday 05 July 2025 23:12:33 +0000 (0:00:13.006) 0:02:27.082 ********* 2025-07-05 23:13:59.993661 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:13:59.993669 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:13:59.993677 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:13:59.993685 | orchestrator | 2025-07-05 23:13:59.993693 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-07-05 23:13:59.993701 | orchestrator | Saturday 05 July 2025 23:13:48 +0000 (0:01:14.764) 0:03:41.847 ********* 2025-07-05 23:13:59.993709 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:13:59.993716 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:13:59.993724 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:13:59.993732 | orchestrator | 2025-07-05 23:13:59.993761 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-07-05 23:13:59.993769 | orchestrator | Saturday 05 July 2025 23:13:56 +0000 (0:00:08.208) 0:03:50.056 ********* 2025-07-05 23:13:59.993777 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:13:59.993785 | orchestrator | 2025-07-05 23:13:59.993793 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:13:59.993801 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-05 23:13:59.993811 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-05 23:13:59.993819 | orchestrator | 
testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-05 23:13:59.993827 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-05 23:13:59.993840 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-05 23:13:59.993848 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-05 23:13:59.993856 | orchestrator | 2025-07-05 23:13:59.993864 | orchestrator | 2025-07-05 23:13:59.993872 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:13:59.993880 | orchestrator | Saturday 05 July 2025 23:13:57 +0000 (0:00:01.097) 0:03:51.154 ********* 2025-07-05 23:13:59.993888 | orchestrator | =============================================================================== 2025-07-05 23:13:59.993896 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 74.76s 2025-07-05 23:13:59.993904 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.71s 2025-07-05 23:13:59.993912 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.43s 2025-07-05 23:13:59.993920 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 13.01s 2025-07-05 23:13:59.993928 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.98s 2025-07-05 23:13:59.993936 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.21s 2025-07-05 23:13:59.993944 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.05s 2025-07-05 23:13:59.993952 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.71s 2025-07-05 23:13:59.993965 | orchestrator | cinder : Copying 
over multiple ceph.conf for cinder services ------------ 4.15s 2025-07-05 23:13:59.993973 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.14s 2025-07-05 23:13:59.993981 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.95s 2025-07-05 23:13:59.993989 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.57s 2025-07-05 23:13:59.993997 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.49s 2025-07-05 23:13:59.994005 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.44s 2025-07-05 23:13:59.994068 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.38s 2025-07-05 23:13:59.994079 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.11s 2025-07-05 23:13:59.994087 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.94s 2025-07-05 23:13:59.994095 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.75s 2025-07-05 23:13:59.994103 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.70s 2025-07-05 23:13:59.994111 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.68s 2025-07-05 23:13:59.994119 | orchestrator | 2025-07-05 23:13:59 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED 2025-07-05 23:13:59.994128 | orchestrator | 2025-07-05 23:13:59 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED 2025-07-05 23:13:59.994136 | orchestrator | 2025-07-05 23:13:59 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:14:03.032160 | orchestrator | 2025-07-05 23:14:03 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state STARTED 2025-07-05 23:14:03.038492 | orchestrator | 
2025-07-05 23:14:03 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:14:03.039067 | orchestrator | 2025-07-05 23:14:03 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED 2025-07-05 23:14:03.039697 | orchestrator | 2025-07-05 23:14:03 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED 2025-07-05 23:14:03.039721 | orchestrator | 2025-07-05 23:14:03 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:15:03.663884 | orchestrator | 2025-07-05 23:15:03 | INFO  | Task f520a201-e63a-4ba1-83c6-bb202fb776da is in state SUCCESS 2025-07-05 23:15:03.665123 | orchestrator | 2025-07-05 23:15:03.665170 | orchestrator | 2025-07-05 23:15:03.665543 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:15:03.665559 | orchestrator | 2025-07-05 23:15:03.665571 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 23:15:03.665583 | orchestrator | Saturday 05 July 2025 23:13:09 +0000 (0:00:00.269) 0:00:00.269 ********* 2025-07-05 23:15:03.665595 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:15:03.665608 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:15:03.665620 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:15:03.665631 | orchestrator | 2025-07-05 23:15:03.665642 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:15:03.665680 | orchestrator | Saturday 05 July 2025 23:13:09 +0000 (0:00:00.306) 0:00:00.576 ********* 2025-07-05 23:15:03.665692 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-07-05 23:15:03.665703 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-07-05 23:15:03.665714 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-07-05 23:15:03.665769 | orchestrator | 2025-07-05 23:15:03.665780 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-07-05 23:15:03.665792 | orchestrator | 2025-07-05 23:15:03.665803 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-05 23:15:03.665814 | orchestrator | Saturday 05 July 2025 23:13:09 +0000 (0:00:00.420) 0:00:00.997 ********* 2025-07-05 23:15:03.665826 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:15:03.665838 | orchestrator | 2025-07-05 23:15:03.665849 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-07-05 23:15:03.665860 | orchestrator | Saturday 05 July 2025 23:13:10 +0000 (0:00:00.536) 0:00:01.534 ********* 2025-07-05 23:15:03.665873 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-07-05 23:15:03.665905 | orchestrator | 2025-07-05 23:15:03.665917 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-07-05 23:15:03.665928 | orchestrator | Saturday 05 July 2025 23:13:13 +0000 (0:00:03.375) 0:00:04.909 ********* 2025-07-05 23:15:03.665939 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-07-05 23:15:03.665950 | orchestrator | changed: [testbed-node-0] => 
(item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-07-05 23:15:03.665961 | orchestrator | 2025-07-05 23:15:03.665972 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-07-05 23:15:03.665983 | orchestrator | Saturday 05 July 2025 23:13:20 +0000 (0:00:06.501) 0:00:11.410 ********* 2025-07-05 23:15:03.665995 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-05 23:15:03.666007 | orchestrator | 2025-07-05 23:15:03.666072 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-07-05 23:15:03.666084 | orchestrator | Saturday 05 July 2025 23:13:23 +0000 (0:00:03.294) 0:00:14.705 ********* 2025-07-05 23:15:03.666095 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-05 23:15:03.666107 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-07-05 23:15:03.666118 | orchestrator | 2025-07-05 23:15:03.666129 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-07-05 23:15:03.666157 | orchestrator | Saturday 05 July 2025 23:13:27 +0000 (0:00:03.929) 0:00:18.634 ********* 2025-07-05 23:15:03.666170 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-05 23:15:03.666184 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-07-05 23:15:03.666197 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-07-05 23:15:03.666209 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-07-05 23:15:03.666223 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-07-05 23:15:03.666236 | orchestrator | 2025-07-05 23:15:03.666248 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-07-05 23:15:03.666260 | orchestrator | Saturday 05 July 2025 23:13:44 +0000 (0:00:16.736) 0:00:35.371 ********* 2025-07-05 23:15:03.666273 | orchestrator | changed: 
[testbed-node-0] => (item=barbican -> service -> admin) 2025-07-05 23:15:03.666285 | orchestrator | 2025-07-05 23:15:03.666297 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-07-05 23:15:03.666310 | orchestrator | Saturday 05 July 2025 23:13:48 +0000 (0:00:04.331) 0:00:39.702 ********* 2025-07-05 23:15:03.666326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.666368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.666383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.666403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.666419 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.666432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.666461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.666475 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.666487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.666499 | orchestrator | 2025-07-05 23:15:03.666510 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-07-05 23:15:03.666522 | orchestrator | Saturday 05 July 2025 23:13:51 +0000 (0:00:02.763) 0:00:42.466 ********* 2025-07-05 23:15:03.666533 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-07-05 23:15:03.666544 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-07-05 23:15:03.666555 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-07-05 23:15:03.666566 | orchestrator | 2025-07-05 23:15:03.666577 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-07-05 23:15:03.666588 | orchestrator | Saturday 05 July 2025 23:13:52 +0000 (0:00:01.410) 
0:00:43.876 ********* 2025-07-05 23:15:03.666599 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:15:03.666610 | orchestrator | 2025-07-05 23:15:03.666621 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-07-05 23:15:03.666632 | orchestrator | Saturday 05 July 2025 23:13:52 +0000 (0:00:00.115) 0:00:43.991 ********* 2025-07-05 23:15:03.666644 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:15:03.666655 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:15:03.666666 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:15:03.666677 | orchestrator | 2025-07-05 23:15:03.666688 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-05 23:15:03.666704 | orchestrator | Saturday 05 July 2025 23:13:53 +0000 (0:00:00.413) 0:00:44.405 ********* 2025-07-05 23:15:03.666738 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:15:03.666759 | orchestrator | 2025-07-05 23:15:03.666770 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-07-05 23:15:03.666782 | orchestrator | Saturday 05 July 2025 23:13:53 +0000 (0:00:00.599) 0:00:45.004 ********* 2025-07-05 23:15:03.666794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.666815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.666828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.666840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.666857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.666876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.666888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.666907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.666919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.666931 | orchestrator | 2025-07-05 23:15:03.666942 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-07-05 23:15:03.666954 | orchestrator | Saturday 05 July 2025 23:13:57 +0000 (0:00:03.958) 0:00:48.963 ********* 2025-07-05 23:15:03.666965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-05 23:15:03.666994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.667007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.667018 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:15:03.667036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-05 23:15:03.667049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.667060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.667072 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:15:03.667088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-05 23:15:03.667107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.667119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.667131 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:15:03.667142 | orchestrator | 2025-07-05 23:15:03.667153 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-07-05 23:15:03.667165 | orchestrator | Saturday 05 July 2025 23:13:58 +0000 (0:00:00.781) 0:00:49.744 ********* 2025-07-05 23:15:03.667183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-05 23:15:03.667196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.667208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.667225 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:15:03.667248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-05 23:15:03.667260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.667272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.667284 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:15:03.667302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-05 23:15:03.667315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.667333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.667345 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:15:03.667356 | orchestrator | 2025-07-05 23:15:03.667368 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-07-05 23:15:03.667384 | orchestrator | Saturday 05 July 2025 23:14:00 +0000 (0:00:01.823) 0:00:51.568 ********* 2025-07-05 23:15:03.667396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.667413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.667426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.667444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.667460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.667472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.667484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.667501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.667513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.667531 | orchestrator | 2025-07-05 23:15:03.667549 | 
orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-07-05 23:15:03.667577 | orchestrator | Saturday 05 July 2025 23:14:04 +0000 (0:00:03.509) 0:00:55.077 ********* 2025-07-05 23:15:03.667595 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:15:03.667615 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:15:03.667634 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:15:03.667653 | orchestrator | 2025-07-05 23:15:03.667672 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-07-05 23:15:03.667691 | orchestrator | Saturday 05 July 2025 23:14:06 +0000 (0:00:02.500) 0:00:57.577 ********* 2025-07-05 23:15:03.667702 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-05 23:15:03.667713 | orchestrator | 2025-07-05 23:15:03.667784 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-07-05 23:15:03.667796 | orchestrator | Saturday 05 July 2025 23:14:07 +0000 (0:00:00.776) 0:00:58.354 ********* 2025-07-05 23:15:03.667807 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:15:03.667819 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:15:03.667830 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:15:03.667841 | orchestrator | 2025-07-05 23:15:03.667852 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-07-05 23:15:03.667864 | orchestrator | Saturday 05 July 2025 23:14:07 +0000 (0:00:00.480) 0:00:58.835 ********* 2025-07-05 23:15:03.667882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.667895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.667916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.667936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.667948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.667960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.667977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.667989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.668000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.668011 | orchestrator | 2025-07-05 23:15:03.668023 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-07-05 23:15:03.668040 | orchestrator | Saturday 05 July 2025 23:14:18 +0000 (0:00:10.531) 0:01:09.367 ********* 2025-07-05 23:15:03.668059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-05 23:15:03.668076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.668095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.668113 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:15:03.668140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2025-07-05 23:15:03.668161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.668188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.668200 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:15:03.668212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-05 23:15:03.668223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.668240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:15:03.668251 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:15:03.668263 | orchestrator | 2025-07-05 23:15:03.668274 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-07-05 23:15:03.668285 | orchestrator | Saturday 05 July 2025 23:14:19 +0000 
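The `haproxy` sub-dict in the barbican-api items above declares two listeners on port 9311: an internal `barbican_api` and an external `barbican_api_external` (fronted by `api.testbed.osism.xyz`, `tls_backend: no`). As a rough sketch of what such an entry expands to, assuming placeholder VIP addresses and a deliberately minimal stanza (kolla-ansible's real HAProxy templates are more involved):

```python
# Sketch: expand a per-service 'haproxy' sub-dict from the log into a
# minimal HAProxy listen stanza. The VIPs are made-up placeholders; only
# the backend addresses (192.168.16.10-12) appear in the log itself.

def haproxy_stanza(name: str, cfg: dict, backends: list[str]) -> str:
    vip = "203.0.113.10" if cfg["external"] else "192.168.16.9"  # placeholders
    lines = [
        f"listen {name}",
        f"    mode {cfg['mode']}",
        f"    bind {vip}:{cfg['listen_port']}",
    ]
    for i, addr in enumerate(backends):
        lines.append(f"    server node{i} {addr}:{cfg['port']} check")
    return "\n".join(lines)

cfg = {"enabled": "yes", "mode": "http", "external": False,
       "port": "9311", "listen_port": "9311", "tls_backend": "no"}
print(haproxy_stanza("barbican_api", cfg,
                     ["192.168.16.10", "192.168.16.11", "192.168.16.12"]))
```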
(0:00:00.666) 0:01:10.034 ********* 2025-07-05 23:15:03.668296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.668318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.668329 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-05 23:15:03.668339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.668354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.668364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.668375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.668402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.668413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:15:03.668423 | orchestrator | 2025-07-05 23:15:03.668433 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-05 23:15:03.668443 | orchestrator | Saturday 05 July 2025 23:14:21 +0000 (0:00:02.838) 0:01:12.872 ********* 2025-07-05 23:15:03.668453 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:15:03.668463 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:15:03.668473 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:15:03.668483 | orchestrator | 2025-07-05 23:15:03.668492 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-07-05 23:15:03.668503 | orchestrator | Saturday 05 July 2025 23:14:22 +0000 (0:00:00.217) 0:01:13.089 ********* 2025-07-05 23:15:03.668512 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:15:03.668527 | orchestrator | 2025-07-05 23:15:03.668543 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-07-05 23:15:03.668559 | orchestrator | Saturday 05 July 2025 23:14:24 
+0000 (0:00:02.132) 0:01:15.222 ********* 2025-07-05 23:15:03.668574 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:15:03.668589 | orchestrator | 2025-07-05 23:15:03.668605 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-07-05 23:15:03.668621 | orchestrator | Saturday 05 July 2025 23:14:26 +0000 (0:00:02.368) 0:01:17.590 ********* 2025-07-05 23:15:03.668636 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:15:03.668652 | orchestrator | 2025-07-05 23:15:03.668667 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-07-05 23:15:03.668685 | orchestrator | Saturday 05 July 2025 23:14:38 +0000 (0:00:11.915) 0:01:29.506 ********* 2025-07-05 23:15:03.668702 | orchestrator | 2025-07-05 23:15:03.668741 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-07-05 23:15:03.668753 | orchestrator | Saturday 05 July 2025 23:14:38 +0000 (0:00:00.083) 0:01:29.590 ********* 2025-07-05 23:15:03.668762 | orchestrator | 2025-07-05 23:15:03.668772 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-07-05 23:15:03.668788 | orchestrator | Saturday 05 July 2025 23:14:38 +0000 (0:00:00.115) 0:01:29.705 ********* 2025-07-05 23:15:03.668799 | orchestrator | 2025-07-05 23:15:03.668809 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-07-05 23:15:03.668826 | orchestrator | Saturday 05 July 2025 23:14:38 +0000 (0:00:00.144) 0:01:29.850 ********* 2025-07-05 23:15:03.668836 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:15:03.668846 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:15:03.668856 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:15:03.668866 | orchestrator | 2025-07-05 23:15:03.668876 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 
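Earlier in this play, the "Creating barbican database" and "Creating barbican database user and setting permissions" tasks ran only on testbed-node-0. As an illustrative sketch of what those steps amount to (kolla-ansible drives this through its toolbox modules, not literal SQL strings; names and host pattern here are assumptions):

```python
# Sketch: the kind of SQL the database bootstrap tasks boil down to.
# Database/user names mirror the log; the password and '%' host are made up.

def bootstrap_db_statements(db: str, user: str, password: str, host: str = "%"):
    return [
        f"CREATE DATABASE IF NOT EXISTS `{db}`",
        f"CREATE USER IF NOT EXISTS '{user}'@'{host}' IDENTIFIED BY '{password}'",
        f"GRANT ALL PRIVILEGES ON `{db}`.* TO '{user}'@'{host}'",
    ]

for stmt in bootstrap_db_statements("barbican", "barbican", "secret"):
    print(stmt + ";")
```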
2025-07-05 23:15:03.668885 | orchestrator | Saturday 05 July 2025 23:14:46 +0000 (0:00:07.243) 0:01:37.093 *********
2025-07-05 23:15:03.668895 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:15:03.668905 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:15:03.668915 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:15:03.668925 | orchestrator |
2025-07-05 23:15:03.668934 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-07-05 23:15:03.668944 | orchestrator | Saturday 05 July 2025 23:14:50 +0000 (0:00:04.885) 0:01:41.979 *********
2025-07-05 23:15:03.668954 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:15:03.668964 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:15:03.668973 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:15:03.668983 | orchestrator |
2025-07-05 23:15:03.668993 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 23:15:03.669005 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-05 23:15:03.669016 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-05 23:15:03.669026 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-05 23:15:03.669036 | orchestrator |
2025-07-05 23:15:03.669046 | orchestrator |
2025-07-05 23:15:03.669056 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 23:15:03.669066 | orchestrator | Saturday 05 July 2025 23:15:02 +0000 (0:00:11.627) 0:01:53.607 *********
2025-07-05 23:15:03.669075 | orchestrator | ===============================================================================
2025-07-05 23:15:03.669085 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.74s
2025-07-05 23:15:03.669102 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.92s
2025-07-05 23:15:03.669112 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.63s
2025-07-05 23:15:03.669122 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.53s
2025-07-05 23:15:03.669132 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.24s
2025-07-05 23:15:03.669141 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.50s
2025-07-05 23:15:03.669151 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 4.89s
2025-07-05 23:15:03.669161 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.33s
2025-07-05 23:15:03.669171 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.96s
2025-07-05 23:15:03.669181 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.93s
2025-07-05 23:15:03.669190 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.51s
2025-07-05 23:15:03.669200 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.38s
2025-07-05 23:15:03.669210 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.29s
2025-07-05 23:15:03.669220 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.84s
2025-07-05 23:15:03.669229 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.76s
2025-07-05 23:15:03.669239 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.50s
2025-07-05 23:15:03.669249 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.37s
2025-07-05 23:15:03.669265 | orchestrator
| barbican : Creating barbican database ----------------------------------- 2.13s
2025-07-05 23:15:03.669275 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.82s
2025-07-05 23:15:03.669284 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.41s
2025-07-05 23:15:03.669294 | orchestrator | 2025-07-05 23:15:03 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:03.669478 | orchestrator | 2025-07-05 23:15:03 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:03.670066 | orchestrator | 2025-07-05 23:15:03 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:03.670085 | orchestrator | 2025-07-05 23:15:03 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:06.694964 | orchestrator | 2025-07-05 23:15:06 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:06.695337 | orchestrator | 2025-07-05 23:15:06 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:06.695997 | orchestrator | 2025-07-05 23:15:06 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:06.696584 | orchestrator | 2025-07-05 23:15:06 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:06.696607 | orchestrator | 2025-07-05 23:15:06 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:09.729375 | orchestrator | 2025-07-05 23:15:09 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:09.733259 | orchestrator | 2025-07-05 23:15:09 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:09.733653 | orchestrator | 2025-07-05 23:15:09 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:09.734288 | orchestrator | 2025-07-05 23:15:09 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:09.734323 | orchestrator | 2025-07-05 23:15:09 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:12.753282 | orchestrator | 2025-07-05 23:15:12 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:12.753898 | orchestrator | 2025-07-05 23:15:12 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:12.754380 | orchestrator | 2025-07-05 23:15:12 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:12.755252 | orchestrator | 2025-07-05 23:15:12 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:12.755281 | orchestrator | 2025-07-05 23:15:12 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:15.782458 | orchestrator | 2025-07-05 23:15:15 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:15.782579 | orchestrator | 2025-07-05 23:15:15 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:15.783101 | orchestrator | 2025-07-05 23:15:15 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:15.783668 | orchestrator | 2025-07-05 23:15:15 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:15.783693 | orchestrator | 2025-07-05 23:15:15 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:18.810641 | orchestrator | 2025-07-05 23:15:18 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:18.810796 | orchestrator | 2025-07-05 23:15:18 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:18.812014 | orchestrator | 2025-07-05 23:15:18 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:18.812386 | orchestrator | 2025-07-05 23:15:18 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:18.812500 | orchestrator | 2025-07-05 23:15:18 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:21.842107 | orchestrator | 2025-07-05 23:15:21 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:21.842236 | orchestrator | 2025-07-05 23:15:21 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:21.843064 | orchestrator | 2025-07-05 23:15:21 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:21.843585 | orchestrator | 2025-07-05 23:15:21 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:21.843616 | orchestrator | 2025-07-05 23:15:21 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:24.869299 | orchestrator | 2025-07-05 23:15:24 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:24.870820 | orchestrator | 2025-07-05 23:15:24 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:24.870851 | orchestrator | 2025-07-05 23:15:24 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:24.871304 | orchestrator | 2025-07-05 23:15:24 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:24.871334 | orchestrator | 2025-07-05 23:15:24 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:27.894934 | orchestrator | 2025-07-05 23:15:27 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:27.895908 | orchestrator | 2025-07-05 23:15:27 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:27.895946 | orchestrator | 2025-07-05 23:15:27 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:27.897301 | orchestrator | 2025-07-05 23:15:27 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:27.897325 | orchestrator | 2025-07-05 23:15:27 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:30.920474 | orchestrator | 2025-07-05 23:15:30 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:30.921070 | orchestrator | 2025-07-05 23:15:30 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:30.923038 | orchestrator | 2025-07-05 23:15:30 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:30.923355 | orchestrator | 2025-07-05 23:15:30 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:30.923389 | orchestrator | 2025-07-05 23:15:30 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:33.954491 | orchestrator | 2025-07-05 23:15:33 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:33.954999 | orchestrator | 2025-07-05 23:15:33 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:33.955827 | orchestrator | 2025-07-05 23:15:33 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:33.956840 | orchestrator | 2025-07-05 23:15:33 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:33.956869 | orchestrator | 2025-07-05 23:15:33 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:37.001841 | orchestrator | 2025-07-05 23:15:36 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:37.007142 | orchestrator | 2025-07-05 23:15:37 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:37.008813 | orchestrator | 2025-07-05 23:15:37 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:37.009772 | orchestrator | 2025-07-05 23:15:37 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:37.009798 | orchestrator | 2025-07-05 23:15:37 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:40.056283 | orchestrator | 2025-07-05 23:15:40 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:40.058264 | orchestrator | 2025-07-05 23:15:40 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:40.060788 | orchestrator | 2025-07-05 23:15:40 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:40.061773 | orchestrator | 2025-07-05 23:15:40 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:40.062115 | orchestrator | 2025-07-05 23:15:40 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:43.101918 | orchestrator | 2025-07-05 23:15:43 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:43.102078 | orchestrator | 2025-07-05 23:15:43 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:43.103880 | orchestrator | 2025-07-05 23:15:43 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:43.105202 | orchestrator | 2025-07-05 23:15:43 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:43.105249 | orchestrator | 2025-07-05 23:15:43 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:46.149872 | orchestrator | 2025-07-05 23:15:46 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:46.152569 | orchestrator | 2025-07-05 23:15:46 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:46.153967 | orchestrator | 2025-07-05 23:15:46 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state STARTED
2025-07-05 23:15:46.154994 | orchestrator | 2025-07-05 23:15:46 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:46.155275 | orchestrator | 2025-07-05 23:15:46 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:49.199244 | orchestrator | 2025-07-05 23:15:49 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:49.201667 | orchestrator | 2025-07-05 23:15:49 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:49.202850 | orchestrator | 2025-07-05 23:15:49 | INFO  | Task 6d6009cb-0136-427c-a363-3c81f5f7c603 is in state SUCCESS
2025-07-05 23:15:49.206451 | orchestrator | 2025-07-05 23:15:49 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:15:49.207680 | orchestrator | 2025-07-05 23:15:49 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:49.208270 | orchestrator | 2025-07-05 23:15:49 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:52.250957 | orchestrator | 2025-07-05 23:15:52 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:52.251093 | orchestrator | 2025-07-05 23:15:52 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:52.251298 | orchestrator | 2025-07-05 23:15:52 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:15:52.252056 | orchestrator | 2025-07-05 23:15:52 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:52.252084 | orchestrator | 2025-07-05 23:15:52 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:55.290464 | orchestrator | 2025-07-05 23:15:55 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:55.291077 | orchestrator | 2025-07-05 23:15:55 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:55.292345 | orchestrator | 2025-07-05 23:15:55 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:15:55.293429 | orchestrator | 2025-07-05 23:15:55 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:55.293654 | orchestrator | 2025-07-05 23:15:55 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:15:58.329769 | orchestrator | 2025-07-05 23:15:58 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:15:58.331039 | orchestrator | 2025-07-05 23:15:58 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:15:58.331094 | orchestrator | 2025-07-05 23:15:58 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:15:58.331681 | orchestrator | 2025-07-05 23:15:58 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:15:58.332226 | orchestrator | 2025-07-05 23:15:58 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:01.386860 | orchestrator | 2025-07-05 23:16:01 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:01.387144 | orchestrator | 2025-07-05 23:16:01 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:01.388397 | orchestrator | 2025-07-05 23:16:01 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:01.389065 | orchestrator | 2025-07-05 23:16:01 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:01.389183 | orchestrator | 2025-07-05 23:16:01 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:04.428234 | orchestrator | 2025-07-05 23:16:04 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:04.428469 | orchestrator | 2025-07-05 23:16:04 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:04.429110 | orchestrator | 2025-07-05 23:16:04 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:04.429923 | orchestrator | 2025-07-05 23:16:04 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:04.429949 | orchestrator | 2025-07-05 23:16:04 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:07.514086 | orchestrator | 2025-07-05 23:16:07 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:07.515866 | orchestrator | 2025-07-05 23:16:07 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:07.517958 | orchestrator | 2025-07-05 23:16:07 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:07.520203 | orchestrator | 2025-07-05 23:16:07 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:07.520388 | orchestrator | 2025-07-05 23:16:07 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:10.553364 | orchestrator | 2025-07-05 23:16:10 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:10.554175 | orchestrator | 2025-07-05 23:16:10 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:10.554537 | orchestrator | 2025-07-05 23:16:10 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:10.555398 | orchestrator | 2025-07-05 23:16:10 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:10.555429 | orchestrator | 2025-07-05 23:16:10 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:13.586570 | orchestrator | 2025-07-05 23:16:13 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:13.588149 | orchestrator | 2025-07-05 23:16:13 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:13.589136 | orchestrator | 2025-07-05 23:16:13 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:13.590013 | orchestrator | 2025-07-05 23:16:13 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:13.590206 | orchestrator | 2025-07-05 23:16:13 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:16.621954 | orchestrator | 2025-07-05 23:16:16 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:16.622162 | orchestrator | 2025-07-05 23:16:16 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:16.624066 | orchestrator | 2025-07-05 23:16:16 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:16.624084 | orchestrator | 2025-07-05 23:16:16 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:16.624092 | orchestrator | 2025-07-05 23:16:16 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:19.651963 | orchestrator | 2025-07-05 23:16:19 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:19.652067 | orchestrator | 2025-07-05 23:16:19 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:19.652848 | orchestrator | 2025-07-05 23:16:19 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:19.654124 | orchestrator | 2025-07-05 23:16:19 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:19.654164 | orchestrator | 2025-07-05 23:16:19 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:22.674842 | orchestrator | 2025-07-05 23:16:22 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:22.674937 | orchestrator | 2025-07-05 23:16:22 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:22.675345 | orchestrator | 2025-07-05 23:16:22 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:22.675923 | orchestrator | 2025-07-05 23:16:22 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:22.675952 | orchestrator | 2025-07-05 23:16:22 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:25.706615 | orchestrator | 2025-07-05 23:16:25 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:25.706888 | orchestrator | 2025-07-05 23:16:25 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:25.707505 | orchestrator | 2025-07-05 23:16:25 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:25.708962 | orchestrator | 2025-07-05 23:16:25 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:25.709001 | orchestrator | 2025-07-05 23:16:25 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:28.746953 | orchestrator | 2025-07-05 23:16:28 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:28.749757 | orchestrator | 2025-07-05 23:16:28 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:28.750657 | orchestrator | 2025-07-05 23:16:28 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:28.751529 | orchestrator | 2025-07-05 23:16:28 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:28.751552 | orchestrator | 2025-07-05 23:16:28 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:31.782261 | orchestrator | 2025-07-05 23:16:31 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:31.783807 | orchestrator | 2025-07-05 23:16:31 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:31.785386 | orchestrator | 2025-07-05 23:16:31 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:31.786973 | orchestrator | 2025-07-05 23:16:31 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:31.787170 | orchestrator | 2025-07-05 23:16:31 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:34.822170 | orchestrator | 2025-07-05 23:16:34 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:34.823831 | orchestrator | 2025-07-05 23:16:34 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:34.823994 | orchestrator | 2025-07-05 23:16:34 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:34.824779 | orchestrator | 2025-07-05 23:16:34 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:34.824808 | orchestrator | 2025-07-05 23:16:34 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:37.853201 | orchestrator | 2025-07-05 23:16:37 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:37.853854 | orchestrator | 2025-07-05 23:16:37 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:37.854250 | orchestrator | 2025-07-05 23:16:37 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:37.855082 | orchestrator | 2025-07-05 23:16:37 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:37.855168 | orchestrator | 2025-07-05 23:16:37 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:40.907764 | orchestrator | 2025-07-05 23:16:40 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:40.909947 | orchestrator | 2025-07-05 23:16:40 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:40.911356 | orchestrator | 2025-07-05 23:16:40 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:40.913465 | orchestrator | 2025-07-05 23:16:40 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:40.913919 | orchestrator | 2025-07-05 23:16:40 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:43.955318 | orchestrator | 2025-07-05 23:16:43 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:43.956206 | orchestrator | 2025-07-05 23:16:43 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:43.958332 | orchestrator | 2025-07-05 23:16:43 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:43.958411 | orchestrator | 2025-07-05 23:16:43 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:43.958528 | orchestrator | 2025-07-05 23:16:43 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:47.011211 | orchestrator | 2025-07-05 23:16:47 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:47.013022 | orchestrator | 2025-07-05 23:16:47 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:47.013761 | orchestrator | 2025-07-05 23:16:47 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:47.015155 | orchestrator | 2025-07-05 23:16:47 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:47.015192 | orchestrator | 2025-07-05 23:16:47 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:50.057412 | orchestrator | 2025-07-05 23:16:50 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:50.058662 | orchestrator | 2025-07-05 23:16:50 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:50.059658 | orchestrator | 2025-07-05 23:16:50 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:50.061295 | orchestrator | 2025-07-05 23:16:50 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:50.061330 | orchestrator | 2025-07-05 23:16:50 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:53.092378 | orchestrator | 2025-07-05 23:16:53 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:53.093044 | orchestrator | 2025-07-05 23:16:53 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:53.093968 | orchestrator | 2025-07-05 23:16:53 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:53.094660 | orchestrator | 2025-07-05 23:16:53 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:53.094731 | orchestrator | 2025-07-05 23:16:53 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:56.121653 | orchestrator | 2025-07-05 23:16:56 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:56.122130 | orchestrator | 2025-07-05 23:16:56 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:56.123071 | orchestrator | 2025-07-05 23:16:56 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:56.124094 | orchestrator | 2025-07-05 23:16:56 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:56.124276 | orchestrator | 2025-07-05 23:16:56 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:16:59.154920 | orchestrator | 2025-07-05 23:16:59 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:16:59.156099 | orchestrator | 2025-07-05 23:16:59 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:16:59.157721 | orchestrator | 2025-07-05 23:16:59 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:16:59.158993 | orchestrator | 2025-07-05 23:16:59 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:16:59.159224 | orchestrator | 2025-07-05 23:16:59 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:17:02.205064 | orchestrator | 2025-07-05 23:17:02 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:17:02.206238 | orchestrator | 2025-07-05 23:17:02 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:17:02.207908 | orchestrator | 2025-07-05 23:17:02 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state STARTED
2025-07-05 23:17:02.209665 | orchestrator | 2025-07-05 23:17:02 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:17:02.209722 | orchestrator | 2025-07-05 23:17:02 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:17:05.247386 | orchestrator | 2025-07-05 23:17:05 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:17:05.249498 | orchestrator | 2025-07-05 23:17:05 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:17:05.252179 | orchestrator | 2025-07-05 23:17:05 | INFO  | Task 2ee08d21-53ee-494c-94a7-c678e3a42273 is in state SUCCESS
2025-07-05 23:17:05.255843 | orchestrator |
2025-07-05 23:17:05.255889 | orchestrator |
2025-07-05 23:17:05.255902 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-07-05 23:17:05.255914 | orchestrator |
2025-07-05 23:17:05.255926 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-07-05 23:17:05.255937 | orchestrator | Saturday 05 July 2025 23:15:10 +0000 (0:00:00.088) 0:00:00.088 *********
2025-07-05 23:17:05.255949 | orchestrator | changed: [localhost]
2025-07-05 23:17:05.255961 | orchestrator |
2025-07-05 23:17:05.255973 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-07-05 23:17:05.255984 | orchestrator | Saturday 05 July 2025 23:15:11 +0000 (0:00:01.236) 0:00:01.324 *********
2025-07-05 23:17:05.255996 | orchestrator | changed: [localhost]
2025-07-05 23:17:05.256007 | orchestrator |
2025-07-05 23:17:05.256018 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-07-05 23:17:05.256029 | orchestrator | Saturday 05 July 2025 23:15:42 +0000 (0:00:30.607) 0:00:31.932 *********
2025-07-05 23:17:05.256040 | orchestrator | changed: [localhost]
2025-07-05 23:17:05.256052 | orchestrator |
2025-07-05 23:17:05.256063 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-05 23:17:05.256074 | orchestrator |
2025-07-05 23:17:05.256085 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-05 23:17:05.256096 | orchestrator | Saturday 05 July 2025 23:15:46 +0000 (0:00:04.130) 0:00:36.063 *********
2025-07-05 23:17:05.256107 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:17:05.256118 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:17:05.256129 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:17:05.256140 | orchestrator |
2025-07-05 23:17:05.256151 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-05 23:17:05.256163 | orchestrator | Saturday 05 July 2025 23:15:46 +0000 (0:00:00.292) 0:00:36.355 *********
2025-07-05 23:17:05.256174 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-07-05 23:17:05.256186 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-07-05 23:17:05.256197 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-07-05 23:17:05.256208 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-07-05 23:17:05.256219 | orchestrator |
2025-07-05 23:17:05.256230 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-07-05 23:17:05.256242 | orchestrator | skipping: no hosts matched
2025-07-05 23:17:05.256254 | orchestrator |
2025-07-05 23:17:05.256265 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 23:17:05.256276 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 23:17:05.256290 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 23:17:05.256321 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 23:17:05.256356 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 23:17:05.256368 | orchestrator |
2025-07-05 23:17:05.256379 | orchestrator |
2025-07-05 23:17:05.256391 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 23:17:05.256402 | orchestrator | Saturday 05 July 2025 23:15:47 +0000 (0:00:00.402) 0:00:36.758 *********
2025-07-05 23:17:05.256413 | orchestrator | ===============================================================================
2025-07-05 23:17:05.256424 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 30.61s
2025-07-05 23:17:05.256435 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.13s
2025-07-05 23:17:05.256446 | orchestrator | Ensure the destination directory exists --------------------------------- 1.24s
2025-07-05 23:17:05.256457 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2025-07-05 23:17:05.256469 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-07-05 23:17:05.256480 | orchestrator |
2025-07-05 23:17:05.256491 | orchestrator |
2025-07-05 23:17:05.256502 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-05 23:17:05.256513 | orchestrator |
2025-07-05 23:17:05.256524 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-05 23:17:05.256534 | orchestrator | Saturday 05 July 2025 23:15:51 +0000 (0:00:00.290) 0:00:00.290 *********
2025-07-05 23:17:05.256545 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:17:05.256556 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:17:05.256568 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:17:05.256579 | orchestrator |
2025-07-05 23:17:05.256590 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-05 23:17:05.256601 | orchestrator | Saturday 05 July 2025 23:15:51 +0000 (0:00:00.399) 0:00:00.690 *********
2025-07-05 23:17:05.256612 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-07-05 23:17:05.256623 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-07-05 23:17:05.256634 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-07-05 23:17:05.256645 | orchestrator |
2025-07-05 23:17:05.256656 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-07-05 23:17:05.256667 | orchestrator |
2025-07-05 23:17:05.256826 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-07-05 23:17:05.256843 | orchestrator | Saturday 05 July 2025 23:15:52 +0000 (0:00:00.455) 0:00:01.146 *********
2025-07-05 23:17:05.256854 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:17:05.256916 | orchestrator |
2025-07-05 23:17:05.256928 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-07-05 23:17:05.256939 | orchestrator | Saturday 05 July 2025 23:15:53 +0000 (0:00:00.826) 0:00:01.972 *********
2025-07-05 23:17:05.256965 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-07-05 23:17:05.256977 | orchestrator |
2025-07-05 23:17:05.256988 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-07-05 23:17:05.256999 | orchestrator | Saturday 05 July 2025 23:15:57 +0000 (0:00:03.959) 0:00:05.932 *********
2025-07-05 23:17:05.257010 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-07-05 23:17:05.257022 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-07-05 23:17:05.257033 | orchestrator |
2025-07-05 23:17:05.257044 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-07-05 23:17:05.257056 | orchestrator | Saturday 05 July 2025 23:16:03 +0000 (0:00:06.945) 0:00:12.877 *********
2025-07-05 23:17:05.257067 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-05 23:17:05.257078 | orchestrator |
2025-07-05 23:17:05.257089 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-07-05 23:17:05.257112 | orchestrator | Saturday 05 July 2025 23:16:07 +0000 (0:00:03.387) 0:00:16.264 *********
2025-07-05 23:17:05.257175 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-05 23:17:05.257188 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-07-05 23:17:05.257199 | orchestrator |
2025-07-05 23:17:05.257210 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-07-05 23:17:05.257221 | orchestrator | Saturday 05 July 2025 23:16:11 +0000 (0:00:03.824) 0:00:20.089 *********
2025-07-05 23:17:05.257233 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-05 23:17:05.257244 | orchestrator |
2025-07-05 23:17:05.257255 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-07-05 23:17:05.257266 | orchestrator | Saturday 05 July 2025 23:16:14 +0000 (0:00:03.197) 0:00:23.287 *********
2025-07-05 23:17:05.257277 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-07-05 23:17:05.257288 | orchestrator |
2025-07-05 23:17:05.257299 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-07-05 23:17:05.257310 | orchestrator | Saturday 05 July 2025 23:16:18 +0000 (0:00:04.488) 0:00:27.775 *********
2025-07-05 23:17:05.257321 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:17:05.257332 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:17:05.257343 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:17:05.257354 | orchestrator |
2025-07-05 23:17:05.257365 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-07-05 23:17:05.257376 | orchestrator | Saturday 05 July 2025 23:16:19 +0000 (0:00:00.399) 0:00:28.175 *********
2025-07-05 23:17:05.257399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-07-05 23:17:05.257416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-07-05 23:17:05.257438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780',
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:17:05.257458 | orchestrator | 2025-07-05 23:17:05.257470 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-07-05 23:17:05.257481 | orchestrator | Saturday 05 July 2025 23:16:20 +0000 (0:00:01.163) 0:00:29.339 ********* 2025-07-05 23:17:05.257491 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:05.257503 | orchestrator | 2025-07-05 23:17:05.257513 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-07-05 23:17:05.257525 | orchestrator | Saturday 05 July 2025 23:16:20 +0000 (0:00:00.147) 0:00:29.486 ********* 2025-07-05 23:17:05.257536 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:05.257547 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:05.257557 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:05.257568 | orchestrator | 2025-07-05 23:17:05.257579 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-05 23:17:05.257590 | orchestrator | Saturday 05 July 2025 23:16:21 +0000 (0:00:00.670) 0:00:30.156 ********* 2025-07-05 23:17:05.257602 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:17:05.257613 | orchestrator | 2025-07-05 23:17:05.257624 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-07-05 23:17:05.257635 | orchestrator | Saturday 05 July 2025 23:16:21 +0000 (0:00:00.637) 0:00:30.794 ********* 2025-07-05 23:17:05.257652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:17:05.257665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:17:05.257702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:17:05.257734 | orchestrator | 2025-07-05 23:17:05.257762 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-07-05 23:17:05.257780 | orchestrator | Saturday 05 July 2025 23:16:23 +0000 (0:00:01.963) 0:00:32.758 ********* 2025-07-05 23:17:05.257793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-05 23:17:05.257805 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:05.257822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-05 23:17:05.257834 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:05.257846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-05 23:17:05.257858 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:05.257869 | orchestrator | 2025-07-05 23:17:05.257880 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-07-05 23:17:05.257891 | orchestrator | Saturday 05 July 2025 
23:16:25 +0000 (0:00:01.787) 0:00:34.545 ********* 2025-07-05 23:17:05.257903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-05 23:17:05.257922 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:05.257941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
2025-07-05 23:17:05.257953 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:05.257965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-05 23:17:05.257976 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:05.257987 | orchestrator | 2025-07-05 23:17:05.257998 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-07-05 23:17:05.258014 | orchestrator | Saturday 05 July 2025 23:16:26 +0000 (0:00:01.263) 0:00:35.808 ********* 2025-07-05 23:17:05.258085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:17:05.258105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:17:05.258128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:17:05.258140 | orchestrator | 2025-07-05 23:17:05.258151 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-07-05 23:17:05.258163 | orchestrator | Saturday 05 July 2025 23:16:28 +0000 (0:00:01.554) 0:00:37.363 ********* 2025-07-05 23:17:05.258174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:17:05.258191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:17:05.258204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:17:05.258229 | orchestrator | 2025-07-05 23:17:05.258240 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-07-05 23:17:05.258252 | orchestrator | Saturday 05 July 2025 23:16:31 +0000 (0:00:02.688) 0:00:40.051 ********* 2025-07-05 23:17:05.258263 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-05 23:17:05.258274 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-05 23:17:05.258285 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-05 
23:17:05.258296 | orchestrator | 2025-07-05 23:17:05.258307 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-07-05 23:17:05.258325 | orchestrator | Saturday 05 July 2025 23:16:32 +0000 (0:00:01.366) 0:00:41.418 ********* 2025-07-05 23:17:05.258337 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:17:05.258348 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:17:05.258359 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:17:05.258370 | orchestrator | 2025-07-05 23:17:05.258381 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-07-05 23:17:05.258392 | orchestrator | Saturday 05 July 2025 23:16:33 +0000 (0:00:01.320) 0:00:42.739 ********* 2025-07-05 23:17:05.258403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-05 23:17:05.258422 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:05.258448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-05 23:17:05.258478 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:05.258497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-05 23:17:05.258514 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:05.258531 | orchestrator | 2025-07-05 23:17:05.258551 | orchestrator | TASK [placement : Check placement containers] ********************************** 
2025-07-05 23:17:05.258569 | orchestrator | Saturday 05 July 2025 23:16:34 +0000 (0:00:00.541) 0:00:43.280 ********* 2025-07-05 23:17:05.258598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:17:05.258611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 
23:17:05.258623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-05 23:17:05.258643 | orchestrator | 2025-07-05 23:17:05.258654 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-07-05 23:17:05.258671 | orchestrator | Saturday 05 July 2025 23:16:36 +0000 (0:00:01.909) 0:00:45.190 ********* 2025-07-05 23:17:05.258713 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:17:05.258725 | orchestrator | 2025-07-05 23:17:05.258736 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-07-05 23:17:05.258748 | orchestrator | Saturday 05 July 2025 23:16:38 +0000 (0:00:02.370) 0:00:47.560 ********* 2025-07-05 23:17:05.258759 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:17:05.258770 | orchestrator | 2025-07-05 23:17:05.258781 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-07-05 23:17:05.258792 | orchestrator | Saturday 05 July 2025 23:16:41 +0000 (0:00:02.360) 0:00:49.920 ********* 2025-07-05 23:17:05.258803 | orchestrator | changed: [testbed-node-0] 2025-07-05 
23:17:05.258814 | orchestrator | 2025-07-05 23:17:05.258825 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-05 23:17:05.258837 | orchestrator | Saturday 05 July 2025 23:16:53 +0000 (0:00:12.948) 0:01:02.868 ********* 2025-07-05 23:17:05.258848 | orchestrator | 2025-07-05 23:17:05.258859 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-05 23:17:05.258870 | orchestrator | Saturday 05 July 2025 23:16:54 +0000 (0:00:00.064) 0:01:02.932 ********* 2025-07-05 23:17:05.258881 | orchestrator | 2025-07-05 23:17:05.258892 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-05 23:17:05.258903 | orchestrator | Saturday 05 July 2025 23:16:54 +0000 (0:00:00.066) 0:01:02.999 ********* 2025-07-05 23:17:05.258914 | orchestrator | 2025-07-05 23:17:05.258925 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-07-05 23:17:05.258936 | orchestrator | Saturday 05 July 2025 23:16:54 +0000 (0:00:00.060) 0:01:03.059 ********* 2025-07-05 23:17:05.258947 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:17:05.258958 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:17:05.258969 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:17:05.258980 | orchestrator | 2025-07-05 23:17:05.258991 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:17:05.259003 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-05 23:17:05.259015 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-05 23:17:05.259026 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-05 23:17:05.259038 | orchestrator | 2025-07-05 23:17:05.259049 | orchestrator | 
2025-07-05 23:17:05.259060 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 23:17:05.259071 | orchestrator | Saturday 05 July 2025 23:17:04 +0000 (0:00:10.703) 0:01:13.763 *********
2025-07-05 23:17:05.259082 | orchestrator | ===============================================================================
2025-07-05 23:17:05.259098 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.95s
2025-07-05 23:17:05.259110 | orchestrator | placement : Restart placement-api container ---------------------------- 10.70s
2025-07-05 23:17:05.259121 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.95s
2025-07-05 23:17:05.259132 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.49s
2025-07-05 23:17:05.259143 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.96s
2025-07-05 23:17:05.259154 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.82s
2025-07-05 23:17:05.259165 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.39s
2025-07-05 23:17:05.259176 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.20s
2025-07-05 23:17:05.259194 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.69s
2025-07-05 23:17:05.259205 | orchestrator | placement : Creating placement databases -------------------------------- 2.37s
2025-07-05 23:17:05.259216 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.36s
2025-07-05 23:17:05.259227 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.96s
2025-07-05 23:17:05.259238 | orchestrator | placement : Check placement containers ---------------------------------- 1.91s
2025-07-05 23:17:05.259249 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.79s
2025-07-05 23:17:05.259260 | orchestrator | placement : Copying over config.json files for services ----------------- 1.55s
2025-07-05 23:17:05.259272 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.37s
2025-07-05 23:17:05.259283 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.32s
2025-07-05 23:17:05.259294 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.26s
2025-07-05 23:17:05.259304 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.16s
2025-07-05 23:17:05.259315 | orchestrator | placement : include_tasks ----------------------------------------------- 0.83s
2025-07-05 23:17:05.259327 | orchestrator | 2025-07-05 23:17:05 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:17:05.259338 | orchestrator | 2025-07-05 23:17:05 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:17:08.299972 | orchestrator | 2025-07-05 23:17:08 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:17:08.302887 | orchestrator | 2025-07-05 23:17:08 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:17:08.305494 | orchestrator | 2025-07-05 23:17:08 | INFO  | Task 1c8115a9-4dcd-4e9a-a245-dcb35fbcaf69 is in state STARTED
2025-07-05 23:17:08.307716 | orchestrator | 2025-07-05 23:17:08 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state STARTED
2025-07-05 23:17:08.308183 | orchestrator | 2025-07-05 23:17:08 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:17:11.353209 | orchestrator | 2025-07-05 23:17:11 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:17:11.354948 | orchestrator | 2025-07-05 23:17:11 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:17:11.356455 | orchestrator | 2025-07-05 23:17:11 | INFO  | Task 1c8115a9-4dcd-4e9a-a245-dcb35fbcaf69 is in state STARTED
2025-07-05 23:17:11.359848 | orchestrator | 2025-07-05 23:17:11 | INFO  | Task 1275c189-3e41-47e8-bda9-6c15b58b39fa is in state SUCCESS
2025-07-05 23:17:11.362381 | orchestrator |
2025-07-05 23:17:11.362430 | orchestrator |
2025-07-05 23:17:11.362443 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-05 23:17:11.362457 | orchestrator |
2025-07-05 23:17:11.362466 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-05 23:17:11.362474 | orchestrator | Saturday 05 July 2025 23:14:03 +0000 (0:00:00.640) 0:00:00.640 *********
2025-07-05 23:17:11.362481 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:17:11.362489 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:17:11.362496 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:17:11.362503 | orchestrator |
2025-07-05 23:17:11.362510 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-05 23:17:11.362517 | orchestrator | Saturday 05 July 2025 23:14:03 +0000 (0:00:00.392) 0:00:01.032 *********
2025-07-05 23:17:11.362525 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-07-05 23:17:11.362532 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-07-05 23:17:11.362539 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-07-05 23:17:11.362566 | orchestrator |
2025-07-05 23:17:11.362573 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-07-05 23:17:11.362580 | orchestrator |
2025-07-05 23:17:11.362587 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-07-05 23:17:11.362594 | orchestrator | Saturday 05 July 2025 23:14:04 +0000 (0:00:00.519) 0:00:01.552 *********
2025-07-05 23:17:11.362601 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:17:11.362612 | orchestrator |
2025-07-05 23:17:11.362623 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-07-05 23:17:11.362633 | orchestrator | Saturday 05 July 2025 23:14:05 +0000 (0:00:01.036) 0:00:02.588 *********
2025-07-05 23:17:11.362650 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-07-05 23:17:11.362661 | orchestrator |
2025-07-05 23:17:11.362710 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-07-05 23:17:11.362723 | orchestrator | Saturday 05 July 2025 23:14:09 +0000 (0:00:03.624) 0:00:06.212 *********
2025-07-05 23:17:11.362735 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-07-05 23:17:11.362747 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-07-05 23:17:11.362758 | orchestrator |
2025-07-05 23:17:11.362766 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-07-05 23:17:11.362773 | orchestrator | Saturday 05 July 2025 23:14:15 +0000 (0:00:06.590) 0:00:12.802 *********
2025-07-05 23:17:11.362780 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-05 23:17:11.362787 | orchestrator |
2025-07-05 23:17:11.362794 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-07-05 23:17:11.362801 | orchestrator | Saturday 05 July 2025 23:14:18 +0000 (0:00:03.136) 0:00:15.939 *********
2025-07-05 23:17:11.362808 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-05 23:17:11.362815 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-07-05 23:17:11.362821 | orchestrator |
2025-07-05 23:17:11.362828 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-07-05 23:17:11.362835 | orchestrator | Saturday 05 July 2025 23:14:22 +0000 (0:00:03.739) 0:00:19.678 *********
2025-07-05 23:17:11.362842 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-05 23:17:11.362848 | orchestrator |
2025-07-05 23:17:11.362855 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-07-05 23:17:11.362862 | orchestrator | Saturday 05 July 2025 23:14:25 +0000 (0:00:03.191) 0:00:22.869 *********
2025-07-05 23:17:11.362869 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-07-05 23:17:11.362875 | orchestrator |
2025-07-05 23:17:11.362882 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-07-05 23:17:11.362889 | orchestrator | Saturday 05 July 2025 23:14:29 +0000 (0:00:03.881) 0:00:26.751 *********
2025-07-05 23:17:11.362912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-05 23:17:11.362939 | orchestrator
| changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:17:11.362955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:17:11.362964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.362973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.362986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.362995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:17:11.363105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:17:11.363113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:17:11.363121 | orchestrator |
2025-07-05 23:17:11.363128 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-07-05 23:17:11.363136 | orchestrator | Saturday 05 July 2025 23:14:33 +0000 (0:00:03.591) 0:00:30.342 *********
2025-07-05 23:17:11.363144 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:17:11.363152 | orchestrator |
2025-07-05 23:17:11.363159 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-07-05 23:17:11.363167 | orchestrator | Saturday 05 July 2025 23:14:33 +0000 (0:00:00.120) 0:00:30.463 *********
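Each container definition logged above carries a healthcheck spec of the form {'interval': '30', 'retries': '3', 'start_period': '5', 'test': [...], 'timeout': '30'}, where the test command is a Kolla helper such as healthcheck_curl, healthcheck_port, or healthcheck_listen. As a rough sketch of the retry semantics such a spec implies (the function names `check_http` and `run_healthcheck` are hypothetical, not part of Kolla):

```python
import time
import urllib.request


def check_http(url, timeout=30):
    """Rough analogue of Kolla's healthcheck_curl: succeed iff the
    endpoint answers with an HTTP status below 400."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except OSError:
        return False


def run_healthcheck(probe, retries=3, interval=1, start_period=0):
    """Hypothetical re-implementation of the loop a container runtime
    applies to a healthcheck spec: after an initial grace period, the
    probe may fail up to `retries` times (with `interval` seconds
    between attempts) before the service is declared unhealthy."""
    time.sleep(start_period)
    for _attempt in range(retries):
        if probe():
            return True
        time.sleep(interval)
    return False
```

A probe like `lambda: check_http("http://192.168.16.10:9001")` would then mirror the designate-api check from the log, under the assumption that the runtime treats `retries` as the failure budget.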
2025-07-05 23:17:11.363175 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:11.363182 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:11.363190 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:11.363197 | orchestrator | 2025-07-05 23:17:11.363205 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-05 23:17:11.363212 | orchestrator | Saturday 05 July 2025 23:14:33 +0000 (0:00:00.390) 0:00:30.853 ********* 2025-07-05 23:17:11.363220 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:17:11.363232 | orchestrator | 2025-07-05 23:17:11.363240 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-07-05 23:17:11.363247 | orchestrator | Saturday 05 July 2025 23:14:35 +0000 (0:00:01.452) 0:00:32.305 ********* 2025-07-05 23:17:11.363259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:17:11.363274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:17:11.363283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:17:11.363291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.363490 | orchestrator | 2025-07-05 23:17:11.363502 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-07-05 23:17:11.363513 | orchestrator | Saturday 05 July 2025 23:14:41 +0000 (0:00:06.613) 0:00:38.919 ********* 2025-07-05 23:17:11.363534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.363542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-05 23:17:11.363554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.363563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}})  2025-07-05 23:17:11.363603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363647 | orchestrator | skipping: [testbed-node-1] 2025-07-05 
23:17:11.363660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363714 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:11.363721 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.363732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-05 23:17:11.363746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363779 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:11.363786 | orchestrator | 2025-07-05 23:17:11.363793 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-07-05 23:17:11.363800 | orchestrator | Saturday 05 July 2025 23:14:42 +0000 (0:00:01.007) 0:00:39.926 ********* 2025-07-05 23:17:11.363808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.363818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-05 
23:17:11.363829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363863 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:11.363870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.363878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-05 23:17:11.363889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363948 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:11.363956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.363966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-05 23:17:11.363974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.363993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.364004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.364012 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:11.364019 | orchestrator | 2025-07-05 23:17:11.364025 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-07-05 23:17:11.364033 | orchestrator | Saturday 05 July 2025 23:14:44 +0000 (0:00:01.847) 0:00:41.773 ********* 2025-07-05 23:17:11.364040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:17:11.364050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:17:11.364062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:17:11.364069 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-07-05 23:17:11.364095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364125 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.364207 | orchestrator | 2025-07-05 23:17:11.364214 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-07-05 23:17:11.364221 | orchestrator | Saturday 05 July 2025 23:14:50 +0000 (0:00:06.101) 0:00:47.874 ********* 2025-07-05 23:17:11.365262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:17:11.365343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:17:11.365376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:17:11.365390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2025-07-05 23:17:11.365452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365495 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365642 | orchestrator | 2025-07-05 23:17:11.365656 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-07-05 23:17:11.365668 | orchestrator | Saturday 05 July 2025 23:15:12 +0000 (0:00:21.210) 0:01:09.085 ********* 2025-07-05 23:17:11.365707 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-05 23:17:11.365720 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-05 23:17:11.365731 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-05 23:17:11.365742 | orchestrator | 2025-07-05 23:17:11.365754 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-07-05 23:17:11.365765 | orchestrator | Saturday 05 July 2025 23:15:18 +0000 (0:00:06.481) 0:01:15.566 ********* 2025-07-05 23:17:11.365778 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-05 23:17:11.365791 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-05 23:17:11.365803 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-05 23:17:11.365815 | orchestrator | 2025-07-05 23:17:11.365828 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-07-05 23:17:11.365847 | orchestrator | Saturday 05 July 2025 23:15:22 +0000 (0:00:03.657) 0:01:19.224 ********* 2025-07-05 23:17:11.365861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.365880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.365902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.365916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.365929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.365950 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.365963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.365981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.366001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.366129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.366191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.366203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.366215 | orchestrator | 2025-07-05 23:17:11.366226 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-07-05 23:17:11.366238 | orchestrator | Saturday 05 July 2025 23:15:25 +0000 (0:00:03.105) 0:01:22.329 ********* 2025-07-05 23:17:11.366256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.366269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.366293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.366305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.366360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.366390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.366443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.366455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:17:11.366514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:17:11.366526 | orchestrator |
2025-07-05 23:17:11.366537 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-07-05 23:17:11.366549 | orchestrator | Saturday 05 July 2025 23:15:29 +0000 (0:00:04.027) 0:01:26.357 *********
2025-07-05 23:17:11.366561 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:17:11.366573 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:17:11.366584 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:17:11.366595 | orchestrator |
2025-07-05 23:17:11.366606 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-07-05 23:17:11.366617 | orchestrator | Saturday 05 July 2025 23:15:29 +0000 (0:00:00.342) 0:01:26.699 *********
2025-07-05 23:17:11.366634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.366658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-05 23:17:11.366701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.366786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-05 23:17:11.366798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-07-05 23:17:11.366810 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:11.366827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366863 | 
orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:11.366875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-05 23:17:11.366899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-05 23:17:11.366911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-05 23:17:11.366952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-05 23:17:11.366964 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:17:11.366975 | orchestrator |
2025-07-05 23:17:11.366986 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-07-05 23:17:11.366998 | orchestrator | Saturday 05 July 2025 23:15:30 +0000 (0:00:00.775) 0:01:27.475 *********
2025-07-05 23:17:11.367010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-05 23:17:11.367034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:17:11.367051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-05 23:17:11.367064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367076 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367146 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:17:11.367412 | orchestrator | 2025-07-05 23:17:11.367430 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-05 23:17:11.367449 | orchestrator | Saturday 05 July 2025 23:15:35 +0000 (0:00:04.679) 0:01:32.154 ********* 2025-07-05 23:17:11.367465 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:11.367476 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:11.367488 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:11.367498 | orchestrator | 2025-07-05 23:17:11.367509 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-07-05 23:17:11.367529 | orchestrator | Saturday 05 July 2025 23:15:35 +0000 (0:00:00.372) 0:01:32.526 ********* 2025-07-05 23:17:11.367540 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-07-05 23:17:11.367551 | orchestrator | 2025-07-05 23:17:11.367562 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-07-05 23:17:11.367573 | orchestrator | Saturday 05 July 2025 23:15:38 +0000 (0:00:02.561) 0:01:35.087 ********* 2025-07-05 23:17:11.367585 | orchestrator 
| changed: [testbed-node-0] => (item=None)
2025-07-05 23:17:11.367596 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-07-05 23:17:11.367607 | orchestrator |
2025-07-05 23:17:11.367618 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-07-05 23:17:11.367629 | orchestrator | Saturday 05 July 2025 23:15:40 +0000 (0:00:02.292) 0:01:37.380 *********
2025-07-05 23:17:11.367640 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:17:11.367651 | orchestrator |
2025-07-05 23:17:11.367663 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-07-05 23:17:11.367716 | orchestrator | Saturday 05 July 2025 23:15:57 +0000 (0:00:17.275) 0:01:54.655 *********
2025-07-05 23:17:11.367730 | orchestrator |
2025-07-05 23:17:11.367742 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-07-05 23:17:11.367753 | orchestrator | Saturday 05 July 2025 23:15:57 +0000 (0:00:00.136) 0:01:54.792 *********
2025-07-05 23:17:11.367764 | orchestrator |
2025-07-05 23:17:11.367775 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-07-05 23:17:11.367786 | orchestrator | Saturday 05 July 2025 23:15:57 +0000 (0:00:00.083) 0:01:54.876 *********
2025-07-05 23:17:11.367797 | orchestrator |
2025-07-05 23:17:11.367808 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-07-05 23:17:11.367827 | orchestrator | Saturday 05 July 2025 23:15:57 +0000 (0:00:00.067) 0:01:54.944 *********
2025-07-05 23:17:11.367839 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:17:11.367850 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:17:11.367861 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:17:11.367872 | orchestrator |
2025-07-05 23:17:11.367883 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-07-05 23:17:11.367894 | orchestrator | Saturday 05 July 2025 23:16:11 +0000 (0:00:13.817) 0:02:08.761 *********
2025-07-05 23:17:11.367905 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:17:11.367916 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:17:11.367927 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:17:11.367938 | orchestrator |
2025-07-05 23:17:11.367949 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-07-05 23:17:11.367960 | orchestrator | Saturday 05 July 2025 23:16:22 +0000 (0:00:10.970) 0:02:19.731 *********
2025-07-05 23:17:11.367971 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:17:11.367982 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:17:11.367993 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:17:11.368004 | orchestrator |
2025-07-05 23:17:11.368015 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-07-05 23:17:11.368027 | orchestrator | Saturday 05 July 2025 23:16:35 +0000 (0:00:13.034) 0:02:32.766 *********
2025-07-05 23:17:11.368038 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:17:11.368049 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:17:11.368060 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:17:11.368071 | orchestrator |
2025-07-05 23:17:11.368082 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-07-05 23:17:11.368093 | orchestrator | Saturday 05 July 2025 23:16:49 +0000 (0:00:13.355) 0:02:46.121 *********
2025-07-05 23:17:11.368104 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:17:11.368115 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:17:11.368126 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:17:11.368137 | orchestrator |
2025-07-05 23:17:11.368148 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-07-05 23:17:11.368173 | orchestrator | Saturday 05 July 2025 23:16:56 +0000 (0:00:07.247) 0:02:53.369 *********
2025-07-05 23:17:11.368184 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:17:11.368195 | orchestrator | changed: [testbed-node-1]
2025-07-05 23:17:11.368213 | orchestrator | changed: [testbed-node-2]
2025-07-05 23:17:11.368225 | orchestrator |
2025-07-05 23:17:11.368236 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-07-05 23:17:11.368247 | orchestrator | Saturday 05 July 2025 23:17:03 +0000 (0:00:06.778) 0:03:00.147 *********
2025-07-05 23:17:11.368258 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:17:11.368268 | orchestrator |
2025-07-05 23:17:11.368280 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 23:17:11.368293 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-05 23:17:11.368305 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-05 23:17:11.368317 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-05 23:17:11.368329 | orchestrator |
2025-07-05 23:17:11.368340 | orchestrator |
2025-07-05 23:17:11.368351 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 23:17:11.368362 | orchestrator | Saturday 05 July 2025 23:17:10 +0000 (0:00:07.529) 0:03:07.677 *********
2025-07-05 23:17:11.368373 | orchestrator | ===============================================================================
2025-07-05 23:17:11.368384 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.21s
2025-07-05 23:17:11.368395 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.28s
2025-07-05 23:17:11.368405 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.82s
2025-07-05 23:17:11.368416 | orchestrator | designate : Restart designate-producer container ----------------------- 13.36s
2025-07-05 23:17:11.368427 | orchestrator | designate : Restart designate-central container ------------------------ 13.03s
2025-07-05 23:17:11.368438 | orchestrator | designate : Restart designate-api container ---------------------------- 10.97s
2025-07-05 23:17:11.368449 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.53s
2025-07-05 23:17:11.368460 | orchestrator | designate : Restart designate-mdns container ---------------------------- 7.25s
2025-07-05 23:17:11.368471 | orchestrator | designate : Restart designate-worker container -------------------------- 6.78s
2025-07-05 23:17:11.368482 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.61s
2025-07-05 23:17:11.368493 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.59s
2025-07-05 23:17:11.368510 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.48s
2025-07-05 23:17:11.368536 | orchestrator | designate : Copying over config.json files for services ----------------- 6.10s
2025-07-05 23:17:11.368561 | orchestrator | designate : Check designate containers ---------------------------------- 4.68s
2025-07-05 23:17:11.368579 | orchestrator | designate : Copying over rndc.key --------------------------------------- 4.03s
2025-07-05 23:17:11.368596 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.88s
2025-07-05 23:17:11.368613 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.74s
2025-07-05 23:17:11.368633 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.66s
2025-07-05 23:17:11.368652 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.62s
2025-07-05 23:17:11.368670 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.59s
2025-07-05 23:17:11.368719 | orchestrator | 2025-07-05 23:17:11 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:17:14.410454 | orchestrator | 2025-07-05 23:17:14 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:17:14.410599 | orchestrator | 2025-07-05 23:17:14 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED
2025-07-05 23:17:14.410865 | orchestrator | 2025-07-05 23:17:14 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state STARTED
2025-07-05 23:17:14.412152 | orchestrator | 2025-07-05 23:17:14 | INFO  | Task 1c8115a9-4dcd-4e9a-a245-dcb35fbcaf69 is in state STARTED
2025-07-05 23:17:14.412185 | orchestrator | 2025-07-05 23:17:14 | INFO  | Wait 1 second(s) until the next check
2025-07-05 23:17:17.456059 | orchestrator | 2025-07-05 23:17:17 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED
2025-07-05 23:17:17.456835 | orchestrator | 2025-07-05 23:17:17 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED
2025-07-05 23:17:17.463287 | orchestrator | 2025-07-05 23:17:17 | INFO  | Task 86519373-588f-402d-b39d-81fdab993439 is in state SUCCESS
2025-07-05 23:17:17.465341 | orchestrator |
2025-07-05 23:17:17.465382 | orchestrator |
2025-07-05 23:17:17.465395 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-05 23:17:17.465407 | orchestrator |
2025-07-05 23:17:17.465419 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-05 23:17:17.465430 | orchestrator | Saturday 05 July 2025 23:13:02 +0000 (0:00:00.257) 0:00:00.257 *********
2025-07-05 23:17:17.465452 | orchestrator | ok: [testbed-node-0]
2025-07-05
23:17:17.465465 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:17:17.465476 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:17:17.465487 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:17:17.465498 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:17:17.465526 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:17:17.465537 | orchestrator |
2025-07-05 23:17:17.465548 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-05 23:17:17.465559 | orchestrator | Saturday 05 July 2025 23:13:02 +0000 (0:00:00.669) 0:00:00.927 *********
2025-07-05 23:17:17.465571 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-07-05 23:17:17.465582 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-07-05 23:17:17.465593 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-07-05 23:17:17.465604 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-07-05 23:17:17.465615 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-07-05 23:17:17.465626 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-07-05 23:17:17.465637 | orchestrator |
2025-07-05 23:17:17.465648 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-07-05 23:17:17.465659 | orchestrator |
2025-07-05 23:17:17.465697 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-05 23:17:17.465711 | orchestrator | Saturday 05 July 2025 23:13:03 +0000 (0:00:00.590) 0:00:01.517 *********
2025-07-05 23:17:17.465723 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:17:17.465736 | orchestrator |
2025-07-05 23:17:17.465747 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-07-05 23:17:17.465758 | orchestrator | Saturday 05 July 2025 23:13:04 +0000 (0:00:01.252) 0:00:02.770 *********
2025-07-05 23:17:17.465769 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:17:17.465780 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:17:17.465791 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:17:17.465804 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:17:17.465823 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:17:17.465834 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:17:17.465851 | orchestrator |
2025-07-05 23:17:17.465867 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-07-05 23:17:17.465881 | orchestrator | Saturday 05 July 2025 23:13:06 +0000 (0:00:01.520) 0:00:04.291 *********
2025-07-05 23:17:17.465924 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:17:17.465938 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:17:17.465950 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:17:17.465963 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:17:17.465975 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:17:17.465987 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:17:17.466000 | orchestrator |
2025-07-05 23:17:17.466082 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-07-05 23:17:17.466107 | orchestrator | Saturday 05 July 2025 23:13:07 +0000 (0:00:01.107) 0:00:05.398 *********
2025-07-05 23:17:17.466120 | orchestrator | ok: [testbed-node-0] => {
2025-07-05 23:17:17.466134 | orchestrator |  "changed": false,
2025-07-05 23:17:17.466146 | orchestrator |  "msg": "All assertions passed"
2025-07-05 23:17:17.466158 | orchestrator | }
2025-07-05 23:17:17.466172 | orchestrator | ok: [testbed-node-1] => {
2025-07-05 23:17:17.466184 | orchestrator |  "changed": false,
2025-07-05 23:17:17.466196 | orchestrator |  "msg": "All assertions passed"
2025-07-05 23:17:17.466209 | orchestrator | }
2025-07-05 23:17:17.466220 | orchestrator | ok: [testbed-node-2] => {
2025-07-05 23:17:17.466231 | orchestrator |  "changed": false,
2025-07-05 23:17:17.466242 | orchestrator |  "msg": "All assertions passed"
2025-07-05 23:17:17.466253 | orchestrator | }
2025-07-05 23:17:17.466264 | orchestrator | ok: [testbed-node-3] => {
2025-07-05 23:17:17.466275 | orchestrator |  "changed": false,
2025-07-05 23:17:17.466286 | orchestrator |  "msg": "All assertions passed"
2025-07-05 23:17:17.466297 | orchestrator | }
2025-07-05 23:17:17.466314 | orchestrator | ok: [testbed-node-4] => {
2025-07-05 23:17:17.466328 | orchestrator |  "changed": false,
2025-07-05 23:17:17.466339 | orchestrator |  "msg": "All assertions passed"
2025-07-05 23:17:17.466350 | orchestrator | }
2025-07-05 23:17:17.466361 | orchestrator | ok: [testbed-node-5] => {
2025-07-05 23:17:17.466372 | orchestrator |  "changed": false,
2025-07-05 23:17:17.466383 | orchestrator |  "msg": "All assertions passed"
2025-07-05 23:17:17.466394 | orchestrator | }
2025-07-05 23:17:17.466405 | orchestrator |
2025-07-05 23:17:17.466416 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-07-05 23:17:17.466428 | orchestrator | Saturday 05 July 2025 23:13:08 +0000 (0:00:00.829) 0:00:06.228 *********
2025-07-05 23:17:17.466439 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:17:17.466450 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:17:17.466461 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:17:17.466472 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:17:17.466482 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:17:17.466493 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:17:17.466505 | orchestrator |
2025-07-05 23:17:17.466516 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-07-05 23:17:17.466528 | orchestrator | Saturday 05 July 2025 23:13:08 +0000 (0:00:00.650) 0:00:06.879 *********
2025-07-05 23:17:17.466539 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-07-05 23:17:17.466550 | orchestrator |
2025-07-05 23:17:17.466561 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-07-05 23:17:17.466572 | orchestrator | Saturday 05 July 2025 23:13:12 +0000 (0:00:03.651) 0:00:10.530 *********
2025-07-05 23:17:17.466584 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-07-05 23:17:17.466596 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-07-05 23:17:17.466607 | orchestrator |
2025-07-05 23:17:17.466642 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-07-05 23:17:17.466654 | orchestrator | Saturday 05 July 2025 23:13:18 +0000 (0:00:06.372) 0:00:16.903 *********
2025-07-05 23:17:17.466666 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-05 23:17:17.466707 | orchestrator |
2025-07-05 23:17:17.466719 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-07-05 23:17:17.466741 | orchestrator | Saturday 05 July 2025 23:13:22 +0000 (0:00:03.168) 0:00:20.071 *********
2025-07-05 23:17:17.466752 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-05 23:17:17.466770 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-07-05 23:17:17.466782 | orchestrator |
2025-07-05 23:17:17.466793 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-07-05 23:17:17.466804 | orchestrator | Saturday 05 July 2025 23:13:26 +0000 (0:00:04.014) 0:00:24.085 *********
2025-07-05 23:17:17.466815 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-05 23:17:17.466826 | orchestrator |
2025-07-05 23:17:17.466837 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-07-05 23:17:17.466848 | orchestrator | Saturday 05 July 2025 23:13:29 +0000 (0:00:03.542) 0:00:27.627 *********
2025-07-05 23:17:17.466859 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-07-05 23:17:17.466870 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-07-05 23:17:17.466881 | orchestrator |
2025-07-05 23:17:17.466892 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-05 23:17:17.466903 | orchestrator | Saturday 05 July 2025 23:13:37 +0000 (0:00:07.896) 0:00:35.523 *********
2025-07-05 23:17:17.466914 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:17:17.466925 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:17:17.466936 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:17:17.466947 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:17:17.466958 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:17:17.466969 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:17:17.466980 | orchestrator |
2025-07-05 23:17:17.466991 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-07-05 23:17:17.467002 | orchestrator | Saturday 05 July 2025 23:13:38 +0000 (0:00:00.728) 0:00:36.251 *********
2025-07-05 23:17:17.467013 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:17:17.467024 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:17:17.467035 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:17:17.467046 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:17:17.467057 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:17:17.467075 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:17:17.467094 | orchestrator |
2025-07-05 23:17:17.467113 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-07-05 23:17:17.467133 |
orchestrator | Saturday 05 July 2025 23:13:40 +0000 (0:00:02.130) 0:00:38.382 *********
2025-07-05 23:17:17.467144 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:17:17.467155 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:17:17.467166 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:17:17.467177 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:17:17.467188 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:17:17.467199 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:17:17.467210 | orchestrator |
2025-07-05 23:17:17.467221 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-07-05 23:17:17.467232 | orchestrator | Saturday 05 July 2025 23:13:41 +0000 (0:00:01.144) 0:00:39.526 *********
2025-07-05 23:17:17.467243 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:17:17.467254 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:17:17.467265 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:17:17.467276 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:17:17.467287 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:17:17.467298 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:17:17.467309 | orchestrator |
2025-07-05 23:17:17.467320 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-07-05 23:17:17.467330 | orchestrator | Saturday 05 July 2025 23:13:43 +0000 (0:00:02.010) 0:00:41.537 *********
2025-07-05 23:17:17.467345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.467383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.467402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.467415 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-05 23:17:17.467428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-05 23:17:17.467447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-05 23:17:17.467459 | orchestrator |
2025-07-05 23:17:17.467470 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-07-05 23:17:17.467482 | orchestrator | Saturday 05 July 2025 23:13:46 +0000 (0:00:02.883) 0:00:44.420 *********
2025-07-05 23:17:17.467493 | orchestrator | [WARNING]: Skipped
2025-07-05 23:17:17.467504 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-07-05 23:17:17.467516 | orchestrator | due to this access issue:
2025-07-05 23:17:17.467527 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-07-05 23:17:17.467538 | orchestrator | a directory
2025-07-05 23:17:17.467549 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-05 23:17:17.467560 | orchestrator |
2025-07-05 23:17:17.467571 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-05 23:17:17.467589 | orchestrator | Saturday 05 July 2025 23:13:47 +0000 (0:00:00.838) 0:00:45.259 *********
2025-07-05 23:17:17.467601 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:17:17.467614 | orchestrator |
2025-07-05 23:17:17.467625 | orchestrator |
TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-07-05 23:17:17.467636 | orchestrator | Saturday 05 July 2025 23:13:48 +0000 (0:00:01.360) 0:00:46.619 ********* 2025-07-05 23:17:17.467653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.467666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.467727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.467740 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-05 23:17:17.467760 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-05 23:17:17.467779 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-05 23:17:17.467791 | orchestrator | 2025-07-05 23:17:17.467802 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-07-05 23:17:17.467813 | orchestrator | Saturday 05 July 2025 23:13:53 +0000 (0:00:04.926) 0:00:51.546 ********* 2025-07-05 23:17:17.467826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.467844 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.467856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.467868 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.467880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.467897 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.467921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.467934 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.467945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.467957 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.467969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.467987 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.467998 | orchestrator | 2025-07-05 23:17:17.468009 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-07-05 23:17:17.468021 | orchestrator | Saturday 05 July 2025 23:13:55 +0000 (0:00:02.422) 0:00:53.968 ********* 2025-07-05 23:17:17.468033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.468044 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.468063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.468076 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.468093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.468114 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.468133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.468166 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.468187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.468207 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.468226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.468238 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.468249 | orchestrator | 2025-07-05 23:17:17.468260 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-07-05 23:17:17.468271 | orchestrator | Saturday 05 July 2025 23:13:58 +0000 (0:00:02.497) 0:00:56.465 ********* 2025-07-05 23:17:17.468282 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.468293 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.468304 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.468315 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.468326 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.468337 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.468348 | orchestrator | 2025-07-05 23:17:17.468359 | orchestrator | TASK [neutron : Check if policies shall be overwritten] 
************************ 2025-07-05 23:17:17.468376 | orchestrator | Saturday 05 July 2025 23:14:01 +0000 (0:00:02.679) 0:00:59.145 ********* 2025-07-05 23:17:17.468388 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.468399 | orchestrator | 2025-07-05 23:17:17.468410 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-07-05 23:17:17.468421 | orchestrator | Saturday 05 July 2025 23:14:01 +0000 (0:00:00.116) 0:00:59.262 ********* 2025-07-05 23:17:17.468432 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.468443 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.468454 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.468465 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.468476 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.468494 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.468513 | orchestrator | 2025-07-05 23:17:17.468539 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-07-05 23:17:17.468558 | orchestrator | Saturday 05 July 2025 23:14:01 +0000 (0:00:00.587) 0:00:59.849 ********* 2025-07-05 23:17:17.468576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 
23:17:17.468594 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.468612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.468630 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.468648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 
23:17:17.468668 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.468842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.468861 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.468897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.468921 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.468932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.468944 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.468955 | orchestrator | 2025-07-05 23:17:17.468966 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-07-05 23:17:17.468977 | orchestrator | Saturday 05 July 2025 23:14:04 +0000 (0:00:02.219) 0:01:02.069 ********* 2025-07-05 23:17:17.468988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.468998 
| orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-05 23:17:17.469017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.469037 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-05 23:17:17.469049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.469059 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-05 23:17:17.469070 | orchestrator | 2025-07-05 23:17:17.469080 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-07-05 23:17:17.469090 | orchestrator | Saturday 05 July 2025 23:14:07 +0000 (0:00:03.693) 0:01:05.763 ********* 2025-07-05 23:17:17.469100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.469123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.469158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.469169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}}) 2025-07-05 23:17:17.469180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-05 23:17:17.469190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-05 23:17:17.469206 | orchestrator | 2025-07-05 23:17:17.469216 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-07-05 23:17:17.469226 | orchestrator | Saturday 05 July 2025 23:14:15 +0000 (0:00:07.673) 0:01:13.436 ********* 2025-07-05 23:17:17.469244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.469259 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.469270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.469280 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.469291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.469301 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.469311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.469322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.469353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.469364 | orchestrator | 2025-07-05 23:17:17.469374 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-07-05 23:17:17.469384 | orchestrator | Saturday 05 July 2025 23:14:19 +0000 (0:00:03.774) 0:01:17.211 ********* 2025-07-05 23:17:17.469394 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.469404 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.469414 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.469423 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:17:17.469433 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:17:17.469443 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:17:17.469452 | orchestrator | 2025-07-05 23:17:17.469462 | orchestrator | 
TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-07-05 23:17:17.469472 | orchestrator | Saturday 05 July 2025 23:14:22 +0000 (0:00:03.186) 0:01:20.398 ********* 2025-07-05 23:17:17.469482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.469492 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.469502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.469513 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.469523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.469539 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.469555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.469571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.469581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.469592 | orchestrator | 2025-07-05 23:17:17.469602 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-07-05 23:17:17.469612 | orchestrator | Saturday 05 July 2025 23:14:26 +0000 (0:00:03.893) 0:01:24.291 ********* 2025-07-05 23:17:17.469621 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.469631 | orchestrator | skipping: [testbed-node-1] 2025-07-05 
23:17:17.469641 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.469651 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.469660 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.469694 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.469705 | orchestrator | 2025-07-05 23:17:17.469715 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-07-05 23:17:17.469725 | orchestrator | Saturday 05 July 2025 23:14:28 +0000 (0:00:02.094) 0:01:26.386 ********* 2025-07-05 23:17:17.469734 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.469744 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.469754 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.469763 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.469773 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.469783 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.469793 | orchestrator | 2025-07-05 23:17:17.469802 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-07-05 23:17:17.469812 | orchestrator | Saturday 05 July 2025 23:14:31 +0000 (0:00:02.718) 0:01:29.104 ********* 2025-07-05 23:17:17.469822 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.469832 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.469841 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.469851 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.469861 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.469871 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.469880 | orchestrator | 2025-07-05 23:17:17.469890 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-07-05 23:17:17.469900 | orchestrator | Saturday 05 July 2025 23:14:33 +0000 (0:00:02.198) 0:01:31.303 ********* 2025-07-05 
23:17:17.469910 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.469919 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.469929 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.469938 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.469948 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.469958 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.469968 | orchestrator | 2025-07-05 23:17:17.469978 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-07-05 23:17:17.469987 | orchestrator | Saturday 05 July 2025 23:14:35 +0000 (0:00:02.358) 0:01:33.661 ********* 2025-07-05 23:17:17.469997 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.470007 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.470046 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.470059 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.470069 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.470079 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.470088 | orchestrator | 2025-07-05 23:17:17.470105 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-07-05 23:17:17.470115 | orchestrator | Saturday 05 July 2025 23:14:37 +0000 (0:00:01.830) 0:01:35.492 ********* 2025-07-05 23:17:17.470125 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.470135 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.470144 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.470154 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.470163 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.470173 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.470183 | orchestrator | 2025-07-05 23:17:17.470192 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] 
************************************* 2025-07-05 23:17:17.470207 | orchestrator | Saturday 05 July 2025 23:14:40 +0000 (0:00:02.712) 0:01:38.204 ********* 2025-07-05 23:17:17.470217 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-05 23:17:17.470227 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.470237 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-05 23:17:17.470247 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.470257 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-05 23:17:17.470273 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.470283 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-05 23:17:17.470292 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.470302 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-05 23:17:17.470312 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.470321 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-05 23:17:17.470331 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.470341 | orchestrator | 2025-07-05 23:17:17.470351 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-07-05 23:17:17.470360 | orchestrator | Saturday 05 July 2025 23:14:42 +0000 (0:00:02.309) 0:01:40.514 ********* 2025-07-05 23:17:17.470371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.470381 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.470392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.470402 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.470412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.470428 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.470443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.470459 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.470469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.470479 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.470489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.470499 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.470508 | orchestrator | 2025-07-05 23:17:17.470518 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-07-05 23:17:17.470528 | orchestrator | Saturday 05 July 2025 23:14:45 +0000 (0:00:02.504) 0:01:43.019 ********* 2025-07-05 23:17:17.470538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.470548 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.470566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.470601 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.470611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.470622 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.470632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.470642 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.470652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.470662 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.470726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.470738 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.470748 | orchestrator | 2025-07-05 23:17:17.470758 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-07-05 23:17:17.470775 | orchestrator | Saturday 05 July 2025 23:14:47 +0000 (0:00:02.288) 0:01:45.307 ********* 2025-07-05 23:17:17.470785 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.470795 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.470805 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.470814 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.470824 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.470961 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.470974 | orchestrator | 2025-07-05 23:17:17.470984 | orchestrator | TASK [neutron : Copying over 
neutron_ovn_metadata_agent.ini] ******************* 2025-07-05 23:17:17.470994 | orchestrator | Saturday 05 July 2025 23:14:49 +0000 (0:00:01.946) 0:01:47.254 ********* 2025-07-05 23:17:17.471004 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.471014 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.471024 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.471033 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:17:17.471043 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:17:17.471053 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:17:17.471062 | orchestrator | 2025-07-05 23:17:17.471078 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-07-05 23:17:17.471088 | orchestrator | Saturday 05 July 2025 23:14:55 +0000 (0:00:05.768) 0:01:53.023 ********* 2025-07-05 23:17:17.471098 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.471107 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.471117 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.471127 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.471137 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.471146 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.471156 | orchestrator | 2025-07-05 23:17:17.471166 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-07-05 23:17:17.471176 | orchestrator | Saturday 05 July 2025 23:14:57 +0000 (0:00:02.789) 0:01:55.813 ********* 2025-07-05 23:17:17.471185 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.471195 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.471205 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.471214 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.471224 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.471233 | orchestrator | skipping: 
[testbed-node-4] 2025-07-05 23:17:17.471243 | orchestrator | 2025-07-05 23:17:17.471253 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-07-05 23:17:17.471263 | orchestrator | Saturday 05 July 2025 23:15:00 +0000 (0:00:03.101) 0:01:58.915 ********* 2025-07-05 23:17:17.471288 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.471308 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.471318 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.471328 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.471337 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.471347 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.471356 | orchestrator | 2025-07-05 23:17:17.471366 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-07-05 23:17:17.471376 | orchestrator | Saturday 05 July 2025 23:15:04 +0000 (0:00:03.509) 0:02:02.425 ********* 2025-07-05 23:17:17.471386 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.471395 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.471405 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.471415 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.471424 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.471434 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.471444 | orchestrator | 2025-07-05 23:17:17.471453 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-07-05 23:17:17.471463 | orchestrator | Saturday 05 July 2025 23:15:07 +0000 (0:00:02.683) 0:02:05.109 ********* 2025-07-05 23:17:17.471473 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.471483 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.471500 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.471509 | orchestrator | skipping: 
[testbed-node-2] 2025-07-05 23:17:17.471519 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.471529 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.471538 | orchestrator | 2025-07-05 23:17:17.471548 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-07-05 23:17:17.471558 | orchestrator | Saturday 05 July 2025 23:15:10 +0000 (0:00:03.442) 0:02:08.551 ********* 2025-07-05 23:17:17.471568 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.471578 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.471589 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.471599 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.471610 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.471621 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.471631 | orchestrator | 2025-07-05 23:17:17.471642 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-07-05 23:17:17.471653 | orchestrator | Saturday 05 July 2025 23:15:13 +0000 (0:00:02.734) 0:02:11.286 ********* 2025-07-05 23:17:17.471664 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.471691 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.471701 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.471711 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.471720 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.471730 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.471740 | orchestrator | 2025-07-05 23:17:17.471749 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-07-05 23:17:17.471759 | orchestrator | Saturday 05 July 2025 23:15:17 +0000 (0:00:04.039) 0:02:15.326 ********* 2025-07-05 23:17:17.471769 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.471779 | orchestrator | skipping: 
[testbed-node-0] 2025-07-05 23:17:17.471788 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.471798 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.471807 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.471817 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.471826 | orchestrator | 2025-07-05 23:17:17.471836 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-07-05 23:17:17.471846 | orchestrator | Saturday 05 July 2025 23:15:19 +0000 (0:00:02.471) 0:02:17.797 ********* 2025-07-05 23:17:17.471856 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-05 23:17:17.471866 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.471876 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-05 23:17:17.471885 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.471901 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-05 23:17:17.471911 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.471921 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-05 23:17:17.471930 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.471940 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-05 23:17:17.471950 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.471965 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-05 23:17:17.471974 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.471984 | orchestrator | 2025-07-05 23:17:17.471994 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] 
******************************** 2025-07-05 23:17:17.472004 | orchestrator | Saturday 05 July 2025 23:15:23 +0000 (0:00:03.363) 0:02:21.161 ********* 2025-07-05 23:17:17.472014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.472031 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.472041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.472051 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.472062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.472072 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.472088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-05 23:17:17.472098 | orchestrator | skipping: 
[testbed-node-1] 2025-07-05 23:17:17.472116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.472132 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.472142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-05 23:17:17.472152 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.472162 | orchestrator | 2025-07-05 23:17:17.472172 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-07-05 23:17:17.472182 | orchestrator | Saturday 05 July 2025 23:15:25 +0000 (0:00:02.150) 0:02:23.312 
********* 2025-07-05 23:17:17.472192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.472203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-05 23:17:17.472219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.472240 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-05 23:17:17.472251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-05 23:17:17.472262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-05 23:17:17.472272 | orchestrator | 2025-07-05 23:17:17.472282 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-05 23:17:17.472292 | orchestrator | Saturday 05 July 2025 23:15:29 +0000 (0:00:04.121) 0:02:27.434 ********* 2025-07-05 23:17:17.472302 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:17:17.472312 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:17:17.472322 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:17:17.472332 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:17:17.472341 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:17:17.472351 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:17:17.472361 | orchestrator | 2025-07-05 23:17:17.472370 | orchestrator | TASK [neutron : Creating Neutron database] 
************************************* 2025-07-05 23:17:17.472380 | orchestrator | Saturday 05 July 2025 23:15:29 +0000 (0:00:00.412) 0:02:27.846 ********* 2025-07-05 23:17:17.472390 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:17:17.472400 | orchestrator | 2025-07-05 23:17:17.472409 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-07-05 23:17:17.472419 | orchestrator | Saturday 05 July 2025 23:15:31 +0000 (0:00:01.997) 0:02:29.843 ********* 2025-07-05 23:17:17.472429 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:17:17.472439 | orchestrator | 2025-07-05 23:17:17.472448 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-07-05 23:17:17.472458 | orchestrator | Saturday 05 July 2025 23:15:34 +0000 (0:00:02.226) 0:02:32.070 ********* 2025-07-05 23:17:17.472468 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:17:17.472483 | orchestrator | 2025-07-05 23:17:17.472493 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-05 23:17:17.472503 | orchestrator | Saturday 05 July 2025 23:16:17 +0000 (0:00:43.269) 0:03:15.340 ********* 2025-07-05 23:17:17.472513 | orchestrator | 2025-07-05 23:17:17.472522 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-05 23:17:17.472532 | orchestrator | Saturday 05 July 2025 23:16:17 +0000 (0:00:00.166) 0:03:15.506 ********* 2025-07-05 23:17:17.472542 | orchestrator | 2025-07-05 23:17:17.472552 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-05 23:17:17.472567 | orchestrator | Saturday 05 July 2025 23:16:17 +0000 (0:00:00.415) 0:03:15.922 ********* 2025-07-05 23:17:17.472577 | orchestrator | 2025-07-05 23:17:17.472587 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-05 23:17:17.472596 | 
orchestrator | Saturday 05 July 2025 23:16:17 +0000 (0:00:00.071) 0:03:15.994 ********* 2025-07-05 23:17:17.472606 | orchestrator | 2025-07-05 23:17:17.472616 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-05 23:17:17.472626 | orchestrator | Saturday 05 July 2025 23:16:18 +0000 (0:00:00.166) 0:03:16.160 ********* 2025-07-05 23:17:17.472636 | orchestrator | 2025-07-05 23:17:17.472650 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-05 23:17:17.472660 | orchestrator | Saturday 05 July 2025 23:16:18 +0000 (0:00:00.113) 0:03:16.274 ********* 2025-07-05 23:17:17.472683 | orchestrator | 2025-07-05 23:17:17.472694 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-07-05 23:17:17.472704 | orchestrator | Saturday 05 July 2025 23:16:18 +0000 (0:00:00.066) 0:03:16.340 ********* 2025-07-05 23:17:17.472713 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:17:17.472723 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:17:17.472733 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:17:17.472743 | orchestrator | 2025-07-05 23:17:17.472753 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-07-05 23:17:17.472763 | orchestrator | Saturday 05 July 2025 23:16:49 +0000 (0:00:30.687) 0:03:47.027 ********* 2025-07-05 23:17:17.472773 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:17:17.472782 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:17:17.472792 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:17:17.472802 | orchestrator | 2025-07-05 23:17:17.472812 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:17:17.472823 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-05 23:17:17.472834 | orchestrator | 
testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-07-05 23:17:17.472844 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-07-05 23:17:17.472854 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-05 23:17:17.472864 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-05 23:17:17.472874 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-05 23:17:17.472884 | orchestrator | 2025-07-05 23:17:17.472894 | orchestrator | 2025-07-05 23:17:17.472904 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:17:17.472914 | orchestrator | Saturday 05 July 2025 23:17:16 +0000 (0:00:27.895) 0:04:14.923 ********* 2025-07-05 23:17:17.472924 | orchestrator | =============================================================================== 2025-07-05 23:17:17.472939 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.27s 2025-07-05 23:17:17.472949 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.69s 2025-07-05 23:17:17.472959 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 27.90s 2025-07-05 23:17:17.472968 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.90s 2025-07-05 23:17:17.472978 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.67s 2025-07-05 23:17:17.472988 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.37s 2025-07-05 23:17:17.472998 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.77s 2025-07-05 23:17:17.473007 | orchestrator | 
service-cert-copy : neutron | Copying over extra CA certificates -------- 4.93s 2025-07-05 23:17:17.473017 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.12s 2025-07-05 23:17:17.473027 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 4.04s 2025-07-05 23:17:17.473037 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.01s 2025-07-05 23:17:17.473047 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.89s 2025-07-05 23:17:17.473056 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.77s 2025-07-05 23:17:17.473066 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.69s 2025-07-05 23:17:17.473076 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.65s 2025-07-05 23:17:17.473085 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.54s 2025-07-05 23:17:17.473095 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.51s 2025-07-05 23:17:17.473105 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.44s 2025-07-05 23:17:17.473115 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.36s 2025-07-05 23:17:17.473125 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.19s 2025-07-05 23:17:17.473139 | orchestrator | 2025-07-05 23:17:17 | INFO  | Task 1c8115a9-4dcd-4e9a-a245-dcb35fbcaf69 is in state STARTED 2025-07-05 23:17:17.473150 | orchestrator | 2025-07-05 23:17:17 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:17:20.490280 | orchestrator | 2025-07-05 23:17:20 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:17:20.492933 | orchestrator | 
2025-07-05 23:17:20 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:17:20.496109 | orchestrator | 2025-07-05 23:17:20 | INFO  | Task 9ce68c1b-07da-425b-81e1-eefb37dee2fb is in state STARTED 2025-07-05 23:17:20.498594 | orchestrator | 2025-07-05 23:17:20 | INFO  | Task 1c8115a9-4dcd-4e9a-a245-dcb35fbcaf69 is in state STARTED 2025-07-05 23:17:20.498622 | orchestrator | 2025-07-05 23:17:20 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:17:26.588215 | orchestrator | 2025-07-05 23:17:26 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:17:26.592760 | orchestrator | 2025-07-05 23:17:26 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:17:26.593480 | orchestrator | 2025-07-05 23:17:26 | INFO  | Task 9ce68c1b-07da-425b-81e1-eefb37dee2fb is in state SUCCESS 2025-07-05 23:17:26.595755 | orchestrator | 2025-07-05 23:17:26 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:17:26.599127 | orchestrator | 2025-07-05 23:17:26 | INFO  | Task 1c8115a9-4dcd-4e9a-a245-dcb35fbcaf69 is in state STARTED 2025-07-05 23:17:26.599175 | orchestrator | 2025-07-05 23:17:26 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:18:42.762996 | orchestrator | 2025-07-05 23:18:42 | INFO  | Task
c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:18:42.764976 | orchestrator | 2025-07-05 23:18:42 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:18:42.767929 | orchestrator | 2025-07-05 23:18:42 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:18:42.770899 | orchestrator | 2025-07-05 23:18:42 | INFO  | Task 1c8115a9-4dcd-4e9a-a245-dcb35fbcaf69 is in state STARTED 2025-07-05 23:18:42.772708 | orchestrator | 2025-07-05 23:18:42 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:18:45.822517 | orchestrator | 2025-07-05 23:18:45 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:18:45.822621 | orchestrator | 2025-07-05 23:18:45 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:18:45.823837 | orchestrator | 2025-07-05 23:18:45 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:18:45.824561 | orchestrator | 2025-07-05 23:18:45 | INFO  | Task 1c8115a9-4dcd-4e9a-a245-dcb35fbcaf69 is in state STARTED 2025-07-05 23:18:45.824586 | orchestrator | 2025-07-05 23:18:45 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:18:48.869969 | orchestrator | 2025-07-05 23:18:48 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:18:48.870541 | orchestrator | 2025-07-05 23:18:48 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:18:48.871874 | orchestrator | 2025-07-05 23:18:48 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:18:48.872868 | orchestrator | 2025-07-05 23:18:48 | INFO  | Task 1c8115a9-4dcd-4e9a-a245-dcb35fbcaf69 is in state STARTED 2025-07-05 23:18:48.872989 | orchestrator | 2025-07-05 23:18:48 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:18:51.900994 | orchestrator | 2025-07-05 23:18:51 | INFO  | Task 
c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:18:51.901136 | orchestrator | 2025-07-05 23:18:51 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:18:51.902100 | orchestrator | 2025-07-05 23:18:51 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:18:51.903321 | orchestrator | 2025-07-05 23:18:51 | INFO  | Task 1c8115a9-4dcd-4e9a-a245-dcb35fbcaf69 is in state STARTED 2025-07-05 23:18:51.903342 | orchestrator | 2025-07-05 23:18:51 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:18:54.936084 | orchestrator | 2025-07-05 23:18:54 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:18:54.936267 | orchestrator | 2025-07-05 23:18:54 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:18:54.937066 | orchestrator | 2025-07-05 23:18:54 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:18:54.937702 | orchestrator | 2025-07-05 23:18:54 | INFO  | Task 1c8115a9-4dcd-4e9a-a245-dcb35fbcaf69 is in state STARTED 2025-07-05 23:18:54.937729 | orchestrator | 2025-07-05 23:18:54 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:18:57.972376 | orchestrator | 2025-07-05 23:18:57 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:18:57.972462 | orchestrator | 2025-07-05 23:18:57 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:18:57.973311 | orchestrator | 2025-07-05 23:18:57 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:18:57.974932 | orchestrator | 2025-07-05 23:18:57 | INFO  | Task 1c8115a9-4dcd-4e9a-a245-dcb35fbcaf69 is in state SUCCESS 2025-07-05 23:18:57.976304 | orchestrator | 2025-07-05 23:18:57.976327 | orchestrator | 2025-07-05 23:18:57.976336 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2025-07-05 23:18:57.976345 | orchestrator |
2025-07-05 23:18:57.976353 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-05 23:18:57.976362 | orchestrator | Saturday 05 July 2025 23:17:21 +0000 (0:00:00.179) 0:00:00.179 *********
2025-07-05 23:18:57.976370 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:18:57.976379 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:18:57.976387 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:18:57.976394 | orchestrator |
2025-07-05 23:18:57.976402 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-05 23:18:57.976410 | orchestrator | Saturday 05 July 2025 23:17:22 +0000 (0:00:00.293) 0:00:00.473 *********
2025-07-05 23:18:57.976417 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-07-05 23:18:57.976425 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-07-05 23:18:57.976433 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-07-05 23:18:57.976441 | orchestrator |
2025-07-05 23:18:57.976448 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-07-05 23:18:57.976456 | orchestrator |
2025-07-05 23:18:57.976464 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-07-05 23:18:57.976471 | orchestrator | Saturday 05 July 2025 23:17:22 +0000 (0:00:00.530) 0:00:01.003 *********
2025-07-05 23:18:57.976479 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:18:57.976487 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:18:57.976494 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:18:57.976502 | orchestrator |
2025-07-05 23:18:57.976509 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 23:18:57.976518 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 23:18:57.976527 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 23:18:57.976535 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 23:18:57.976543 | orchestrator |
2025-07-05 23:18:57.976550 | orchestrator |
2025-07-05 23:18:57.976579 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 23:18:57.976587 | orchestrator | Saturday 05 July 2025 23:17:23 +0000 (0:00:00.749) 0:00:01.753 *********
2025-07-05 23:18:57.976608 | orchestrator | ===============================================================================
2025-07-05 23:18:57.976616 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.75s
2025-07-05 23:18:57.976624 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2025-07-05 23:18:57.976657 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-07-05 23:18:57.976665 | orchestrator |
2025-07-05 23:18:57.976672 | orchestrator |
2025-07-05 23:18:57.976680 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-05 23:18:57.976687 | orchestrator |
2025-07-05 23:18:57.976694 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-05 23:18:57.976701 | orchestrator | Saturday 05 July 2025 23:17:09 +0000 (0:00:00.260) 0:00:00.260 *********
2025-07-05 23:18:57.976709 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:18:57.976716 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:18:57.976724 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:18:57.976731 | orchestrator |
2025-07-05 23:18:57.976738 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-05 23:18:57.976746 | orchestrator |
Saturday 05 July 2025 23:17:09 +0000 (0:00:00.291) 0:00:00.551 *********
2025-07-05 23:18:57.976753 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-07-05 23:18:57.976760 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-07-05 23:18:57.976768 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-07-05 23:18:57.976775 | orchestrator |
2025-07-05 23:18:57.976782 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-07-05 23:18:57.976790 | orchestrator |
2025-07-05 23:18:57.976797 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-07-05 23:18:57.976804 | orchestrator | Saturday 05 July 2025 23:17:09 +0000 (0:00:00.385) 0:00:00.937 *********
2025-07-05 23:18:57.976846 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:18:57.976855 | orchestrator |
2025-07-05 23:18:57.976862 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-07-05 23:18:57.976870 | orchestrator | Saturday 05 July 2025 23:17:10 +0000 (0:00:00.550) 0:00:01.487 *********
2025-07-05 23:18:57.976878 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-07-05 23:18:57.976886 | orchestrator |
2025-07-05 23:18:57.976893 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-07-05 23:18:57.976901 | orchestrator | Saturday 05 July 2025 23:17:13 +0000 (0:00:02.842) 0:00:04.330 *********
2025-07-05 23:18:57.976909 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-07-05 23:18:57.976918 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-07-05 23:18:57.976927 | orchestrator |
2025-07-05 23:18:57.976936 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-07-05 23:18:57.976945 | orchestrator | Saturday 05 July 2025 23:17:18 +0000 (0:00:05.565) 0:00:09.895 *********
2025-07-05 23:18:57.976954 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-05 23:18:57.976962 | orchestrator |
2025-07-05 23:18:57.976971 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-07-05 23:18:57.976979 | orchestrator | Saturday 05 July 2025 23:17:22 +0000 (0:00:03.277) 0:00:13.172 *********
2025-07-05 23:18:57.976996 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-05 23:18:57.977006 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-07-05 23:18:57.977014 | orchestrator |
2025-07-05 23:18:57.977023 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-07-05 23:18:57.977038 | orchestrator | Saturday 05 July 2025 23:17:26 +0000 (0:00:04.142) 0:00:17.315 *********
2025-07-05 23:18:57.977047 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-05 23:18:57.977055 | orchestrator |
2025-07-05 23:18:57.977064 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-07-05 23:18:57.977073 | orchestrator | Saturday 05 July 2025 23:17:29 +0000 (0:00:03.233) 0:00:20.548 *********
2025-07-05 23:18:57.977081 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-07-05 23:18:57.977089 | orchestrator |
2025-07-05 23:18:57.977098 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-07-05 23:18:57.977106 | orchestrator | Saturday 05 July 2025 23:17:33 +0000 (0:00:03.246) 0:00:24.628 *********
2025-07-05 23:18:57.977115 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:18:57.977123 | orchestrator |
2025-07-05 23:18:57.977132 | orchestrator | TASK [magnum : Creating
Magnum trustee user] *********************************** 2025-07-05 23:18:57.977140 | orchestrator | Saturday 05 July 2025 23:17:36 +0000 (0:00:03.246) 0:00:27.875 ********* 2025-07-05 23:18:57.977149 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:18:57.977157 | orchestrator | 2025-07-05 23:18:57.977166 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-07-05 23:18:57.977174 | orchestrator | Saturday 05 July 2025 23:17:40 +0000 (0:00:03.872) 0:00:31.747 ********* 2025-07-05 23:18:57.977183 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:18:57.977191 | orchestrator | 2025-07-05 23:18:57.977200 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-07-05 23:18:57.977209 | orchestrator | Saturday 05 July 2025 23:17:44 +0000 (0:00:03.733) 0:00:35.480 ********* 2025-07-05 23:18:57.977224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.977237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.977247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.977268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.977278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.977290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 
2025-07-05 23:18:57.977299 | orchestrator |
2025-07-05 23:18:57.977308 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-07-05 23:18:57.977316 | orchestrator | Saturday 05 July 2025 23:17:45 +0000 (0:00:01.370) 0:00:36.851 *********
2025-07-05 23:18:57.977324 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:18:57.977331 | orchestrator |
2025-07-05 23:18:57.977339 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-07-05 23:18:57.977347 | orchestrator | Saturday 05 July 2025 23:17:45 +0000 (0:00:00.142) 0:00:36.994 *********
2025-07-05 23:18:57.977355 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:18:57.977362 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:18:57.977370 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:18:57.977378 | orchestrator |
2025-07-05 23:18:57.977385 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-07-05 23:18:57.977393 | orchestrator | Saturday 05 July 2025 23:17:46 +0000 (0:00:00.508) 0:00:37.502 *********
2025-07-05 23:18:57.977401 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-05 23:18:57.977409 | orchestrator |
2025-07-05 23:18:57.977416 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-07-05 23:18:57.977424 | orchestrator | Saturday 05 July 2025 23:17:47 +0000 (0:00:00.847) 0:00:38.349 *********
2025-07-05 23:18:57.977432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-05 23:18:57.977452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:18:57.977460 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:18:57.977468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-05 23:18:57.977480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:18:57.977489 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:18:57.977497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  
2025-07-05 23:18:57.977510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:18:57.977519 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:18:57.977526 | orchestrator | 2025-07-05 23:18:57.977534 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-07-05 23:18:57.977542 | orchestrator | Saturday 05 July 2025 23:17:47 +0000 (0:00:00.584) 0:00:38.933 ********* 2025-07-05 23:18:57.977550 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:18:57.977562 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:18:57.977570 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:18:57.977577 | orchestrator | 2025-07-05 23:18:57.977585 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-05 23:18:57.977592 | orchestrator | Saturday 05 July 2025 23:17:48 +0000 (0:00:00.274) 0:00:39.208 ********* 2025-07-05 23:18:57.977600 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:18:57.977608 | orchestrator | 2025-07-05 23:18:57.977615 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-07-05 23:18:57.977623 | orchestrator | Saturday 05 
July 2025 23:17:48 +0000 (0:00:00.661) 0:00:39.870 ********* 2025-07-05 23:18:57.977655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.977668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.977682 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.977690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.977705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.977713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.977721 | orchestrator | 2025-07-05 23:18:57.977729 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-07-05 23:18:57.977737 | orchestrator | Saturday 05 July 2025 23:17:51 +0000 (0:00:02.401) 0:00:42.271 ********* 2025-07-05 23:18:57.977749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-05 23:18:57.977763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:18:57.977772 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:18:57.977780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-05 23:18:57.977793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:18:57.977802 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:18:57.977810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  
2025-07-05 23:18:57.977822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:18:57.977839 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:18:57.977847 | orchestrator | 2025-07-05 23:18:57.977855 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-07-05 23:18:57.977863 | orchestrator | Saturday 05 July 2025 23:17:51 +0000 (0:00:00.598) 0:00:42.870 ********* 2025-07-05 23:18:57.977871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-05 23:18:57.977885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:18:57.977893 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:18:57.977901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-05 23:18:57.977913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:18:57.977926 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:18:57.977935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-05 23:18:57.977943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:18:57.977951 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:18:57.977959 | orchestrator | 2025-07-05 23:18:57.977967 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-07-05 23:18:57.977975 | orchestrator | Saturday 05 July 2025 23:17:52 +0000 (0:00:01.139) 0:00:44.009 ********* 2025-07-05 23:18:57.978178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.978193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.978212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.978221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.978230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.978244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.978252 | orchestrator | 2025-07-05 23:18:57.978260 | orchestrator | TASK 
[magnum : Copying over magnum.conf] *************************************** 2025-07-05 23:18:57.978268 | orchestrator | Saturday 05 July 2025 23:17:55 +0000 (0:00:02.679) 0:00:46.689 ********* 2025-07-05 23:18:57.978277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.978293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.978302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.978311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.978324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.978332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.978345 | orchestrator | 2025-07-05 23:18:57.978353 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-07-05 23:18:57.978361 | orchestrator | Saturday 05 July 2025 23:18:02 +0000 (0:00:06.992) 0:00:53.681 ********* 2025-07-05 23:18:57.978372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-05 23:18:57.978381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:18:57.978389 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:18:57.978398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-05 23:18:57.978410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:18:57.978418 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:18:57.978431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-05 23:18:57.978443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:18:57.978451 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:18:57.978459 | orchestrator | 2025-07-05 23:18:57.978467 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-07-05 23:18:57.978475 | orchestrator | Saturday 05 July 2025 23:18:03 +0000 (0:00:00.692) 0:00:54.374 ********* 2025-07-05 23:18:57.978483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.978496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.978505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-05 23:18:57.978521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.978529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.978538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:18:57.978546 | orchestrator | 2025-07-05 23:18:57.978554 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-05 23:18:57.978562 | orchestrator | Saturday 05 July 2025 23:18:05 +0000 (0:00:02.035) 0:00:56.409 ********* 2025-07-05 23:18:57.978570 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:18:57.978577 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:18:57.978585 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:18:57.978593 | orchestrator | 2025-07-05 23:18:57.978600 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-07-05 23:18:57.978608 | orchestrator | Saturday 05 July 2025 23:18:05 +0000 (0:00:00.289) 0:00:56.698 ********* 2025-07-05 23:18:57.978615 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:18:57.978623 | orchestrator | 2025-07-05 23:18:57.978647 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-07-05 23:18:57.978655 | orchestrator | Saturday 05 July 2025 23:18:07 +0000 (0:00:02.137) 0:00:58.836 ********* 2025-07-05 23:18:57.978663 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:18:57.978676 | orchestrator | 2025-07-05 23:18:57.978688 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-07-05 23:18:57.978696 | orchestrator | Saturday 05 July 2025 23:18:09 +0000 (0:00:02.192) 0:01:01.029 ********* 2025-07-05 23:18:57.978704 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:18:57.978711 | orchestrator | 2025-07-05 
23:18:57.978719 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-05 23:18:57.978727 | orchestrator | Saturday 05 July 2025 23:18:27 +0000 (0:00:18.019) 0:01:19.048 ********* 2025-07-05 23:18:57.978735 | orchestrator | 2025-07-05 23:18:57.978742 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-05 23:18:57.978750 | orchestrator | Saturday 05 July 2025 23:18:28 +0000 (0:00:00.065) 0:01:19.113 ********* 2025-07-05 23:18:57.978757 | orchestrator | 2025-07-05 23:18:57.978765 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-05 23:18:57.978773 | orchestrator | Saturday 05 July 2025 23:18:28 +0000 (0:00:00.066) 0:01:19.180 ********* 2025-07-05 23:18:57.978780 | orchestrator | 2025-07-05 23:18:57.978788 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-07-05 23:18:57.978795 | orchestrator | Saturday 05 July 2025 23:18:28 +0000 (0:00:00.140) 0:01:19.320 ********* 2025-07-05 23:18:57.978803 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:18:57.978810 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:18:57.978818 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:18:57.978825 | orchestrator | 2025-07-05 23:18:57.978833 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-07-05 23:18:57.978840 | orchestrator | Saturday 05 July 2025 23:18:43 +0000 (0:00:15.007) 0:01:34.328 ********* 2025-07-05 23:18:57.978848 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:18:57.978855 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:18:57.978863 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:18:57.978870 | orchestrator | 2025-07-05 23:18:57.978878 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:18:57.978886 | 
orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-05 23:18:57.978894 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-05 23:18:57.978906 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-05 23:18:57.978913 | orchestrator | 2025-07-05 23:18:57.978921 | orchestrator | 2025-07-05 23:18:57.978929 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:18:57.978936 | orchestrator | Saturday 05 July 2025 23:18:54 +0000 (0:00:11.577) 0:01:45.906 ********* 2025-07-05 23:18:57.978944 | orchestrator | =============================================================================== 2025-07-05 23:18:57.978951 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.02s 2025-07-05 23:18:57.978959 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.01s 2025-07-05 23:18:57.978966 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.58s 2025-07-05 23:18:57.978974 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.99s 2025-07-05 23:18:57.978981 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.57s 2025-07-05 23:18:57.978989 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.14s 2025-07-05 23:18:57.978996 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.08s 2025-07-05 23:18:57.979004 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.87s 2025-07-05 23:18:57.979011 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.73s 2025-07-05 23:18:57.979018 | orchestrator | 
service-ks-register : magnum | Creating projects ------------------------ 3.28s 2025-07-05 23:18:57.979044 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.25s 2025-07-05 23:18:57.979051 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.23s 2025-07-05 23:18:57.979059 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 2.84s 2025-07-05 23:18:57.979066 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.68s 2025-07-05 23:18:57.979074 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.40s 2025-07-05 23:18:57.979081 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.19s 2025-07-05 23:18:57.979089 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.14s 2025-07-05 23:18:57.979096 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.04s 2025-07-05 23:18:57.979104 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.37s 2025-07-05 23:18:57.979111 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.14s 2025-07-05 23:18:57.979119 | orchestrator | 2025-07-05 23:18:57 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:19:01.031245 | orchestrator | 2025-07-05 23:19:01 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:19:01.032150 | orchestrator | 2025-07-05 23:19:01 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:19:01.033700 | orchestrator | 2025-07-05 23:19:01 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:19:01.033853 | orchestrator | 2025-07-05 23:19:01 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:19:04.071957 | orchestrator | 
2025-07-05 23:19:04 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:19:04.074118 | orchestrator | 2025-07-05 23:19:04 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:19:04.075825 | orchestrator | 2025-07-05 23:19:04 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:19:04.075842 | orchestrator | 2025-07-05 23:19:04 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:19:07.107801 | orchestrator | 2025-07-05 23:19:07 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:19:07.110350 | orchestrator | 2025-07-05 23:19:07 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:19:07.112144 | orchestrator | 2025-07-05 23:19:07 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:19:07.112164 | orchestrator | 2025-07-05 23:19:07 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:19:10.144662 | orchestrator | 2025-07-05 23:19:10 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:19:10.145260 | orchestrator | 2025-07-05 23:19:10 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:19:10.146389 | orchestrator | 2025-07-05 23:19:10 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:19:10.146420 | orchestrator | 2025-07-05 23:19:10 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:19:13.192745 | orchestrator | 2025-07-05 23:19:13 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:19:13.193313 | orchestrator | 2025-07-05 23:19:13 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:19:13.195485 | orchestrator | 2025-07-05 23:19:13 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:19:13.195528 | orchestrator | 2025-07-05 23:19:13 | INFO  | 
Wait 1 second(s) until the next check 2025-07-05 23:19:16.238859 | orchestrator | 2025-07-05 23:19:16 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:19:16.238986 | orchestrator | 2025-07-05 23:19:16 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:19:16.239002 | orchestrator | 2025-07-05 23:19:16 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:19:16.239014 | orchestrator | 2025-07-05 23:19:16 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:19:19.282151 | orchestrator | 2025-07-05 23:19:19 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:19:19.283872 | orchestrator | 2025-07-05 23:19:19 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:19:19.286488 | orchestrator | 2025-07-05 23:19:19 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:19:19.286577 | orchestrator | 2025-07-05 23:19:19 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:19:22.334178 | orchestrator | 2025-07-05 23:19:22 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:19:22.335449 | orchestrator | 2025-07-05 23:19:22 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:19:22.338474 | orchestrator | 2025-07-05 23:19:22 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:19:22.338558 | orchestrator | 2025-07-05 23:19:22 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:19:25.379183 | orchestrator | 2025-07-05 23:19:25 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:19:25.380421 | orchestrator | 2025-07-05 23:19:25 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:19:25.382710 | orchestrator | 2025-07-05 23:19:25 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state 
STARTED 2025-07-05 23:19:25.382749 | orchestrator | 2025-07-05 23:19:25 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:19:28.432921 | orchestrator | 2025-07-05 23:19:28 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state STARTED 2025-07-05 23:19:28.433350 | orchestrator | 2025-07-05 23:19:28 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:19:28.434418 | orchestrator | 2025-07-05 23:19:28 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:19:28.434442 | orchestrator | 2025-07-05 23:19:28 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:19:31.477459 | orchestrator | 2025-07-05 23:19:31 | INFO  | Task c4c2b9a2-6333-46d0-94bb-0191278dfe32 is in state SUCCESS 2025-07-05 23:19:31.479198 | orchestrator | 2025-07-05 23:19:31.479244 | orchestrator | 2025-07-05 23:19:31.479258 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:19:31.479271 | orchestrator | 2025-07-05 23:19:31.479284 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-07-05 23:19:31.479296 | orchestrator | Saturday 05 July 2025 23:10:41 +0000 (0:00:00.447) 0:00:00.448 ********* 2025-07-05 23:19:31.479366 | orchestrator | changed: [testbed-manager] 2025-07-05 23:19:31.479380 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.479391 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:19:31.479402 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:19:31.479414 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:19:31.479425 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:19:31.479436 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:19:31.479447 | orchestrator | 2025-07-05 23:19:31.479459 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 23:19:31.479763 | orchestrator | Saturday 05 July 
2025 23:10:43 +0000 (0:00:02.066) 0:00:02.514 ********* 2025-07-05 23:19:31.479794 | orchestrator | changed: [testbed-manager] 2025-07-05 23:19:31.479807 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.479820 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:19:31.479832 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:19:31.479844 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:19:31.479856 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:19:31.479869 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:19:31.479881 | orchestrator | 2025-07-05 23:19:31.479894 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:19:31.479906 | orchestrator | Saturday 05 July 2025 23:10:44 +0000 (0:00:01.316) 0:00:03.830 ********* 2025-07-05 23:19:31.479919 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-07-05 23:19:31.479932 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-07-05 23:19:31.479944 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-07-05 23:19:31.479957 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-07-05 23:19:31.479969 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-07-05 23:19:31.479982 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-07-05 23:19:31.479994 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-07-05 23:19:31.480006 | orchestrator | 2025-07-05 23:19:31.480032 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-07-05 23:19:31.480045 | orchestrator | 2025-07-05 23:19:31.480058 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-07-05 23:19:31.480070 | orchestrator | Saturday 05 July 2025 23:10:46 +0000 (0:00:01.753) 0:00:05.584 ********* 2025-07-05 23:19:31.480082 | 
orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:19:31.480094 | orchestrator | 2025-07-05 23:19:31.480107 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-07-05 23:19:31.480120 | orchestrator | Saturday 05 July 2025 23:10:47 +0000 (0:00:01.163) 0:00:06.747 ********* 2025-07-05 23:19:31.480132 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-07-05 23:19:31.480143 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-07-05 23:19:31.480154 | orchestrator | 2025-07-05 23:19:31.480181 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-07-05 23:19:31.480194 | orchestrator | Saturday 05 July 2025 23:10:51 +0000 (0:00:04.121) 0:00:10.868 ********* 2025-07-05 23:19:31.480205 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-05 23:19:31.480230 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-05 23:19:31.480242 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.480253 | orchestrator | 2025-07-05 23:19:31.480264 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-07-05 23:19:31.480275 | orchestrator | Saturday 05 July 2025 23:10:56 +0000 (0:00:04.363) 0:00:15.232 ********* 2025-07-05 23:19:31.480286 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.480296 | orchestrator | 2025-07-05 23:19:31.480307 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-07-05 23:19:31.480318 | orchestrator | Saturday 05 July 2025 23:10:56 +0000 (0:00:00.807) 0:00:16.040 ********* 2025-07-05 23:19:31.480329 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.480340 | orchestrator | 2025-07-05 23:19:31.480420 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-07-05 23:19:31.480442 | orchestrator | 
Saturday 05 July 2025 23:10:58 +0000 (0:00:01.884) 0:00:17.924 ********* 2025-07-05 23:19:31.480508 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.480527 | orchestrator | 2025-07-05 23:19:31.480545 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-05 23:19:31.480563 | orchestrator | Saturday 05 July 2025 23:11:01 +0000 (0:00:03.111) 0:00:21.035 ********* 2025-07-05 23:19:31.480582 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.480640 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.480759 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.480778 | orchestrator | 2025-07-05 23:19:31.480799 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-07-05 23:19:31.480819 | orchestrator | Saturday 05 July 2025 23:11:02 +0000 (0:00:00.705) 0:00:21.741 ********* 2025-07-05 23:19:31.480838 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:19:31.480857 | orchestrator | 2025-07-05 23:19:31.480875 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-07-05 23:19:31.480894 | orchestrator | Saturday 05 July 2025 23:11:33 +0000 (0:00:30.603) 0:00:52.344 ********* 2025-07-05 23:19:31.480914 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.480931 | orchestrator | 2025-07-05 23:19:31.480943 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-05 23:19:31.480955 | orchestrator | Saturday 05 July 2025 23:11:47 +0000 (0:00:14.002) 0:01:06.347 ********* 2025-07-05 23:19:31.480966 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:19:31.480977 | orchestrator | 2025-07-05 23:19:31.480988 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-05 23:19:31.480999 | orchestrator | Saturday 05 July 2025 23:11:59 +0000 (0:00:11.961) 0:01:18.309 ********* 2025-07-05 
23:19:31.481028 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:19:31.481040 | orchestrator | 2025-07-05 23:19:31.481052 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-07-05 23:19:31.481063 | orchestrator | Saturday 05 July 2025 23:12:00 +0000 (0:00:01.033) 0:01:19.343 ********* 2025-07-05 23:19:31.481074 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.481085 | orchestrator | 2025-07-05 23:19:31.481096 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-05 23:19:31.481107 | orchestrator | Saturday 05 July 2025 23:12:00 +0000 (0:00:00.468) 0:01:19.811 ********* 2025-07-05 23:19:31.481119 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:19:31.481130 | orchestrator | 2025-07-05 23:19:31.481141 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-07-05 23:19:31.481152 | orchestrator | Saturday 05 July 2025 23:12:01 +0000 (0:00:00.545) 0:01:20.356 ********* 2025-07-05 23:19:31.481163 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:19:31.481174 | orchestrator | 2025-07-05 23:19:31.481185 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-07-05 23:19:31.481196 | orchestrator | Saturday 05 July 2025 23:12:19 +0000 (0:00:18.583) 0:01:38.940 ********* 2025-07-05 23:19:31.481207 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.481218 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.481229 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.481240 | orchestrator | 2025-07-05 23:19:31.481251 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-07-05 23:19:31.481262 | orchestrator | 2025-07-05 23:19:31.481273 | orchestrator | TASK [Bootstrap deploy] 
******************************************************** 2025-07-05 23:19:31.481285 | orchestrator | Saturday 05 July 2025 23:12:20 +0000 (0:00:00.722) 0:01:39.663 ********* 2025-07-05 23:19:31.481296 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:19:31.481307 | orchestrator | 2025-07-05 23:19:31.481318 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-07-05 23:19:31.481329 | orchestrator | Saturday 05 July 2025 23:12:22 +0000 (0:00:02.414) 0:01:42.077 ********* 2025-07-05 23:19:31.481340 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.481351 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.481370 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.481381 | orchestrator | 2025-07-05 23:19:31.481392 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-07-05 23:19:31.481404 | orchestrator | Saturday 05 July 2025 23:12:25 +0000 (0:00:02.271) 0:01:44.349 ********* 2025-07-05 23:19:31.481426 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.481437 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.481448 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.481459 | orchestrator | 2025-07-05 23:19:31.481470 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-07-05 23:19:31.481481 | orchestrator | Saturday 05 July 2025 23:12:27 +0000 (0:00:02.326) 0:01:46.675 ********* 2025-07-05 23:19:31.481493 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.481503 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.481514 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.481525 | orchestrator | 2025-07-05 23:19:31.481536 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-07-05 23:19:31.481548 | orchestrator | 
Saturday 05 July 2025 23:12:28 +0000 (0:00:00.545) 0:01:47.221 ********* 2025-07-05 23:19:31.481564 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-05 23:19:31.481583 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.481601 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-05 23:19:31.481645 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.481662 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-07-05 23:19:31.481673 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-07-05 23:19:31.481684 | orchestrator | 2025-07-05 23:19:31.481695 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-07-05 23:19:31.481706 | orchestrator | Saturday 05 July 2025 23:12:37 +0000 (0:00:09.007) 0:01:56.228 ********* 2025-07-05 23:19:31.481717 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.481728 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.481739 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.481750 | orchestrator | 2025-07-05 23:19:31.481761 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-07-05 23:19:31.481772 | orchestrator | Saturday 05 July 2025 23:12:37 +0000 (0:00:00.412) 0:01:56.641 ********* 2025-07-05 23:19:31.481783 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-05 23:19:31.481794 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.481805 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-05 23:19:31.481816 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.481827 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-05 23:19:31.481837 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.481848 | orchestrator | 2025-07-05 23:19:31.481860 | orchestrator | TASK [nova-cell : Ensuring config directories exist] 
*************************** 2025-07-05 23:19:31.481871 | orchestrator | Saturday 05 July 2025 23:12:38 +0000 (0:00:01.052) 0:01:57.693 ********* 2025-07-05 23:19:31.481881 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.481892 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.481903 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.481914 | orchestrator | 2025-07-05 23:19:31.481925 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-07-05 23:19:31.481936 | orchestrator | Saturday 05 July 2025 23:12:39 +0000 (0:00:00.432) 0:01:58.126 ********* 2025-07-05 23:19:31.481947 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.481958 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.481969 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.481980 | orchestrator | 2025-07-05 23:19:31.481991 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-07-05 23:19:31.482002 | orchestrator | Saturday 05 July 2025 23:12:40 +0000 (0:00:01.101) 0:01:59.227 ********* 2025-07-05 23:19:31.482064 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.482079 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.482100 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.482112 | orchestrator | 2025-07-05 23:19:31.482123 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-07-05 23:19:31.482134 | orchestrator | Saturday 05 July 2025 23:12:43 +0000 (0:00:03.396) 0:02:02.624 ********* 2025-07-05 23:19:31.482155 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.482166 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.482177 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:19:31.482188 | orchestrator | 2025-07-05 23:19:31.482199 | orchestrator | TASK [nova-cell : Get a list of existing cells] 
******************************** 2025-07-05 23:19:31.482210 | orchestrator | Saturday 05 July 2025 23:13:03 +0000 (0:00:20.473) 0:02:23.098 ********* 2025-07-05 23:19:31.482221 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.482232 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.482243 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:19:31.482254 | orchestrator | 2025-07-05 23:19:31.482265 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-05 23:19:31.482276 | orchestrator | Saturday 05 July 2025 23:13:15 +0000 (0:00:11.843) 0:02:34.941 ********* 2025-07-05 23:19:31.482287 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:19:31.482297 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.482308 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.482319 | orchestrator | 2025-07-05 23:19:31.482330 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-07-05 23:19:31.482341 | orchestrator | Saturday 05 July 2025 23:13:16 +0000 (0:00:00.814) 0:02:35.755 ********* 2025-07-05 23:19:31.482352 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.482363 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.482374 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.482385 | orchestrator | 2025-07-05 23:19:31.482396 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-07-05 23:19:31.482407 | orchestrator | Saturday 05 July 2025 23:13:28 +0000 (0:00:11.845) 0:02:47.601 ********* 2025-07-05 23:19:31.482418 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.482429 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.482439 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.482450 | orchestrator | 2025-07-05 23:19:31.482462 | orchestrator | TASK [Bootstrap upgrade] 
*******************************************************
2025-07-05 23:19:31.482473 | orchestrator | Saturday 05 July 2025 23:13:30 +0000 (0:00:01.543) 0:02:49.145 *********
2025-07-05 23:19:31.482490 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.482501 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.482512 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.482523 | orchestrator |
2025-07-05 23:19:31.482534 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-07-05 23:19:31.482545 | orchestrator |
2025-07-05 23:19:31.482556 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-05 23:19:31.482567 | orchestrator | Saturday 05 July 2025 23:13:30 +0000 (0:00:00.318) 0:02:49.464 *********
2025-07-05 23:19:31.482578 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:19:31.482590 | orchestrator |
2025-07-05 23:19:31.482601 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-07-05 23:19:31.482612 | orchestrator | Saturday 05 July 2025 23:13:30 +0000 (0:00:00.518) 0:02:49.982 *********
2025-07-05 23:19:31.482640 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-07-05 23:19:31.482651 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-07-05 23:19:31.482662 | orchestrator |
2025-07-05 23:19:31.482673 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-07-05 23:19:31.482685 | orchestrator | Saturday 05 July 2025 23:13:34 +0000 (0:00:03.261) 0:02:53.244 *********
2025-07-05 23:19:31.482696 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-07-05 23:19:31.482709 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-07-05 23:19:31.482720 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-07-05 23:19:31.482737 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-07-05 23:19:31.482749 | orchestrator |
2025-07-05 23:19:31.482760 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-07-05 23:19:31.482771 | orchestrator | Saturday 05 July 2025 23:13:40 +0000 (0:00:06.678) 0:02:59.923 *********
2025-07-05 23:19:31.482783 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-05 23:19:31.482820 | orchestrator |
2025-07-05 23:19:31.482832 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-07-05 23:19:31.482843 | orchestrator | Saturday 05 July 2025 23:13:44 +0000 (0:00:03.318) 0:03:03.241 *********
2025-07-05 23:19:31.482854 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-05 23:19:31.482865 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-07-05 23:19:31.482901 | orchestrator |
2025-07-05 23:19:31.482912 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-07-05 23:19:31.482923 | orchestrator | Saturday 05 July 2025 23:13:48 +0000 (0:00:03.943) 0:03:07.185 *********
2025-07-05 23:19:31.482934 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-05 23:19:31.482945 | orchestrator |
2025-07-05 23:19:31.482956 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-07-05 23:19:31.482967 | orchestrator | Saturday 05 July 2025 23:13:51 +0000 (0:00:03.663) 0:03:10.848 *********
2025-07-05 23:19:31.482978 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-07-05 23:19:31.482989 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-07-05 23:19:31.483000 | orchestrator |
2025-07-05 23:19:31.483011 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-07-05 23:19:31.483028 | orchestrator | Saturday 05 July 2025 23:13:59 +0000 (0:00:08.209) 0:03:19.058 *********
2025-07-05 23:19:31.483047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.483071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.483093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.483116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.483129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.483142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.483155 | orchestrator |
2025-07-05 23:19:31.483166 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-07-05 23:19:31.483178 | orchestrator | Saturday 05 July 2025 23:14:01 +0000 (0:00:01.808) 0:03:20.867 *********
2025-07-05 23:19:31.483194 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.483205 | orchestrator |
2025-07-05 23:19:31.483216 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-07-05 23:19:31.483228 | orchestrator | Saturday 05 July 2025 23:14:01 +0000 (0:00:00.108) 0:03:20.975 *********
2025-07-05 23:19:31.483239 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.483261 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.483272 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.483283 | orchestrator |
2025-07-05 23:19:31.483295 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-07-05 23:19:31.483306 | orchestrator | Saturday 05 July 2025 23:14:02 +0000 (0:00:00.620) 0:03:21.596 *********
2025-07-05 23:19:31.483316 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-05 23:19:31.483327 | orchestrator |
2025-07-05 23:19:31.483338 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-07-05 23:19:31.483350 | orchestrator | Saturday 05 July 2025 23:14:03 +0000 (0:00:01.205) 0:03:22.801 *********
2025-07-05 23:19:31.483361 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.483371 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.483382 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.483393 | orchestrator |
2025-07-05 23:19:31.483404 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-05 23:19:31.483416 | orchestrator | Saturday 05 July 2025 23:14:04 +0000 (0:00:00.310) 0:03:23.112 *********
2025-07-05 23:19:31.483446 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:19:31.483458 | orchestrator |
2025-07-05 23:19:31.483469 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-07-05 23:19:31.483481 | orchestrator | Saturday 05 July 2025 23:14:05 +0000 (0:00:01.396) 0:03:24.508 *********
2025-07-05 23:19:31.483493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.483526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.483553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.483567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.483580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.483598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.483611 | orchestrator |
2025-07-05 23:19:31.483650 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-07-05 23:19:31.483661 | orchestrator | Saturday 05 July 2025 23:14:08 +0000 (0:00:02.838) 0:03:27.347 *********
2025-07-05 23:19:31.483674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.483706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.483719 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.483731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.483744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.483755 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.483776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.483801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.483813 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.483824 | orchestrator |
2025-07-05 23:19:31.483835 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-07-05 23:19:31.483847 | orchestrator | Saturday 05 July 2025 23:14:09 +0000 (0:00:01.738) 0:03:29.086 *********
2025-07-05 23:19:31.483859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.483871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.483883 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.483904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.483924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.483936 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.483948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.483960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.483972 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.483983 | orchestrator |
2025-07-05 23:19:31.483994 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2025-07-05 23:19:31.484005 | orchestrator | Saturday 05 July 2025 23:14:12 +0000 (0:00:02.310) 0:03:31.396 *********
2025-07-05 23:19:31.484025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.484079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.484094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.484106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.484126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.484145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.484157 | orchestrator |
2025-07-05 23:19:31.484169 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2025-07-05 23:19:31.484180 | orchestrator | Saturday 05 July 2025 23:14:15 +0000 (0:00:02.744) 0:03:34.141 *********
2025-07-05 23:19:31.484196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.484209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.484229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-05 23:19:31.484248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.484265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.484277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler',
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.484289 | orchestrator | 2025-07-05 23:19:31.484300 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-07-05 23:19:31.484311 | orchestrator | Saturday 05 July 2025 23:14:23 +0000 (0:00:07.997) 0:03:42.139 ********* 2025-07-05 23:19:31.484323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-07-05 23:19:31.484341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.484360 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.484372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-05 
23:19:31.484389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.484402 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.484414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-05 23:19:31.484426 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.484444 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.484455 | orchestrator | 2025-07-05 23:19:31.484466 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-07-05 23:19:31.484477 | orchestrator | Saturday 05 July 2025 23:14:24 +0000 (0:00:01.138) 0:03:43.277 ********* 2025-07-05 23:19:31.484488 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:19:31.484500 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.484511 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:19:31.484522 | orchestrator | 2025-07-05 23:19:31.484539 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-07-05 23:19:31.484550 | orchestrator | Saturday 05 July 2025 23:14:26 +0000 (0:00:02.026) 0:03:45.304 ********* 2025-07-05 23:19:31.484562 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.484573 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.484584 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.484595 | orchestrator | 2025-07-05 23:19:31.484606 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-07-05 23:19:31.484636 | orchestrator | Saturday 05 July 2025 23:14:26 +0000 (0:00:00.329) 0:03:45.633 ********* 2025-07-05 23:19:31.484657 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-05 23:19:31.484671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-05 23:19:31.484690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-05 23:19:31.484717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.484730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.484747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.484758 | orchestrator | 2025-07-05 23:19:31.484770 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-05 23:19:31.484781 | orchestrator | Saturday 05 July 2025 23:14:28 +0000 (0:00:02.358) 0:03:47.992 ********* 2025-07-05 23:19:31.484792 | orchestrator | 2025-07-05 23:19:31.484803 | orchestrator | TASK [nova : Flush handlers] 
*************************************************** 2025-07-05 23:19:31.484815 | orchestrator | Saturday 05 July 2025 23:14:29 +0000 (0:00:00.249) 0:03:48.241 ********* 2025-07-05 23:19:31.484826 | orchestrator | 2025-07-05 23:19:31.484837 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-05 23:19:31.484848 | orchestrator | Saturday 05 July 2025 23:14:29 +0000 (0:00:00.256) 0:03:48.498 ********* 2025-07-05 23:19:31.484859 | orchestrator | 2025-07-05 23:19:31.484870 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-07-05 23:19:31.484881 | orchestrator | Saturday 05 July 2025 23:14:29 +0000 (0:00:00.395) 0:03:48.894 ********* 2025-07-05 23:19:31.484892 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.484903 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:19:31.484914 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:19:31.484931 | orchestrator | 2025-07-05 23:19:31.484942 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-07-05 23:19:31.484954 | orchestrator | Saturday 05 July 2025 23:14:49 +0000 (0:00:19.215) 0:04:08.110 ********* 2025-07-05 23:19:31.484965 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:19:31.484976 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.484987 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:19:31.484997 | orchestrator | 2025-07-05 23:19:31.485008 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-07-05 23:19:31.485019 | orchestrator | 2025-07-05 23:19:31.485030 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-05 23:19:31.485041 | orchestrator | Saturday 05 July 2025 23:15:00 +0000 (0:00:11.271) 0:04:19.382 ********* 2025-07-05 23:19:31.485053 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:19:31.485064 | orchestrator | 2025-07-05 23:19:31.485075 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-05 23:19:31.485086 | orchestrator | Saturday 05 July 2025 23:15:02 +0000 (0:00:01.749) 0:04:21.131 ********* 2025-07-05 23:19:31.485097 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:19:31.485108 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:19:31.485119 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:19:31.485130 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.485140 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.485151 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.485162 | orchestrator | 2025-07-05 23:19:31.485173 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-07-05 23:19:31.485184 | orchestrator | Saturday 05 July 2025 23:15:03 +0000 (0:00:01.716) 0:04:22.848 ********* 2025-07-05 23:19:31.485195 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.485206 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.485217 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.485228 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-05 23:19:31.485239 | orchestrator | 2025-07-05 23:19:31.485250 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-05 23:19:31.485266 | orchestrator | Saturday 05 July 2025 23:15:04 +0000 (0:00:01.106) 0:04:23.954 ********* 2025-07-05 23:19:31.485278 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-07-05 23:19:31.485289 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-07-05 23:19:31.485300 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 
2025-07-05 23:19:31.485311 | orchestrator | 2025-07-05 23:19:31.485322 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-05 23:19:31.485333 | orchestrator | Saturday 05 July 2025 23:15:06 +0000 (0:00:01.292) 0:04:25.247 ********* 2025-07-05 23:19:31.485344 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-07-05 23:19:31.485355 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-07-05 23:19:31.485366 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-07-05 23:19:31.485377 | orchestrator | 2025-07-05 23:19:31.485388 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-05 23:19:31.485399 | orchestrator | Saturday 05 July 2025 23:15:07 +0000 (0:00:01.593) 0:04:26.840 ********* 2025-07-05 23:19:31.485410 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-07-05 23:19:31.485421 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:19:31.485432 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-07-05 23:19:31.485443 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:19:31.485454 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-07-05 23:19:31.485465 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:19:31.485476 | orchestrator | 2025-07-05 23:19:31.485487 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-07-05 23:19:31.485504 | orchestrator | Saturday 05 July 2025 23:15:09 +0000 (0:00:01.510) 0:04:28.351 ********* 2025-07-05 23:19:31.485516 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-05 23:19:31.485526 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-05 23:19:31.485538 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-05 
23:19:31.485548 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-05 23:19:31.485559 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.485575 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-05 23:19:31.485586 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-05 23:19:31.485597 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-05 23:19:31.485608 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.485668 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-05 23:19:31.485680 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-05 23:19:31.485691 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-05 23:19:31.485702 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.485713 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-05 23:19:31.485724 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-05 23:19:31.485735 | orchestrator | 2025-07-05 23:19:31.485746 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-07-05 23:19:31.485757 | orchestrator | Saturday 05 July 2025 23:15:10 +0000 (0:00:01.429) 0:04:29.781 ********* 2025-07-05 23:19:31.485768 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.485779 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.485790 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.485801 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:19:31.485812 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:19:31.485823 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:19:31.485834 | 
orchestrator | 2025-07-05 23:19:31.485845 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-07-05 23:19:31.485856 | orchestrator | Saturday 05 July 2025 23:15:12 +0000 (0:00:01.777) 0:04:31.559 ********* 2025-07-05 23:19:31.485867 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.485878 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.485888 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.485898 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:19:31.485908 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:19:31.485917 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:19:31.485927 | orchestrator | 2025-07-05 23:19:31.485937 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-05 23:19:31.485947 | orchestrator | Saturday 05 July 2025 23:15:14 +0000 (0:00:02.214) 0:04:33.774 ********* 2025-07-05 23:19:31.485963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-05 23:19:31.485983 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-05 23:19:31.485999 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486010 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486065 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486078 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486116 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486154 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486225 | orchestrator | 2025-07-05 23:19:31.486235 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-05 23:19:31.486245 | orchestrator | Saturday 05 July 2025 23:15:18 +0000 
(0:00:04.032) 0:04:37.806 ********* 2025-07-05 23:19:31.486255 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:19:31.486266 | orchestrator | 2025-07-05 23:19:31.486281 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-05 23:19:31.486291 | orchestrator | Saturday 05 July 2025 23:15:19 +0000 (0:00:01.036) 0:04:38.843 ********* 2025-07-05 23:19:31.486301 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486344 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486355 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486370 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486380 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486435 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486463 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486474 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486484 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.486500 | orchestrator | 2025-07-05 23:19:31.486510 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-05 23:19:31.486520 | orchestrator | Saturday 05 July 2025 23:15:24 +0000 (0:00:05.075) 0:04:43.918 ********* 2025-07-05 23:19:31.486537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-05 23:19:31.486548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-05 23:19:31.486562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.486573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-05 23:19:31.486589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-05 23:19:31.486605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.486631 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:19:31.486642 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:19:31.486653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-05 23:19:31.486668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-05 23:19:31.486679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.486689 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:19:31.486812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-05 23:19:31.486824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.486834 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.486854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-05 23:19:31.486865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.486875 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.486885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-05 23:19:31.486901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.486912 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.486922 | orchestrator | 2025-07-05 23:19:31.486932 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-07-05 23:19:31.486948 | orchestrator | Saturday 05 July 2025 23:15:27 +0000 (0:00:03.087) 0:04:47.006 ********* 2025-07-05 23:19:31.486958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-05 23:19:31.486969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-05 23:19:31.486986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.486996 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:19:31.487007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-05 23:19:31.487022 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-05 23:19:31.487033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.487049 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:19:31.487059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-05 23:19:31.487075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-05 23:19:31.487086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.487096 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:19:31.487111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-05 23:19:31.487121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.487137 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.487147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-05 23:19:31.487158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-05 23:19:31.487168 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.487178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-05 23:19:31.487194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.487205 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.487215 | orchestrator |
2025-07-05 23:19:31.487225 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-07-05 23:19:31.487235 | orchestrator | Saturday 05 July 2025 23:15:29 +0000 (0:00:02.083) 0:04:49.089 *********
2025-07-05 23:19:31.487245 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.487255 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.487265 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.487308 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-05 23:19:31.487321 | orchestrator |
2025-07-05 23:19:31.487331 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-07-05 23:19:31.487341 | orchestrator | Saturday 05 July 2025 23:15:30 +0000 (0:00:00.651) 0:04:49.740 *********
2025-07-05 23:19:31.487351 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-05 23:19:31.487361 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-05 23:19:31.487370 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-05 23:19:31.487387 | orchestrator |
2025-07-05 23:19:31.487397 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-07-05 23:19:31.487407 | orchestrator | Saturday 05 July 2025 23:15:31 +0000 (0:00:01.173) 0:04:50.914 *********
2025-07-05 23:19:31.487416 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-05 23:19:31.487426 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-05 23:19:31.487436 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-05 23:19:31.487446 | orchestrator |
2025-07-05 23:19:31.487460 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-07-05 23:19:31.487471 | orchestrator | Saturday 05 July 2025 23:15:32 +0000 (0:00:00.882) 0:04:51.797 *********
2025-07-05 23:19:31.487480 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:19:31.487491 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:19:31.487500 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:19:31.487510 | orchestrator |
2025-07-05 23:19:31.487520 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-07-05 23:19:31.487530 | orchestrator | Saturday 05 July 2025 23:15:33 +0000 (0:00:00.482) 0:04:52.279 *********
2025-07-05 23:19:31.487540 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:19:31.487550 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:19:31.487559 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:19:31.487569 | orchestrator |
2025-07-05 23:19:31.487579 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-07-05 23:19:31.487589 | orchestrator | Saturday 05 July 2025 23:15:33 +0000 (0:00:00.482) 0:04:52.762 *********
2025-07-05 23:19:31.487599 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-07-05 23:19:31.487609 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-07-05 23:19:31.487671 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-07-05 23:19:31.487682 | orchestrator |
2025-07-05 23:19:31.487692 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-07-05 23:19:31.487702 | orchestrator | Saturday 05 July 2025 23:15:34 +0000 (0:00:01.321) 0:04:54.084 *********
2025-07-05 23:19:31.487712 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-07-05 23:19:31.487722 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-07-05 23:19:31.487732 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-07-05 23:19:31.487742 | orchestrator |
2025-07-05 23:19:31.487751 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-07-05 23:19:31.487761 | orchestrator | Saturday 05 July 2025 23:15:36 +0000 (0:00:01.324) 0:04:55.409 *********
2025-07-05 23:19:31.487771 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-07-05 23:19:31.487815 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-07-05 23:19:31.487827 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-07-05 23:19:31.487836 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-07-05 23:19:31.487846 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-07-05 23:19:31.487856 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-07-05 23:19:31.487866 | orchestrator |
2025-07-05 23:19:31.487876 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-07-05 23:19:31.487885 | orchestrator | Saturday 05 July 2025 23:15:40 +0000 (0:00:04.045) 0:04:59.454 *********
2025-07-05 23:19:31.487895 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:19:31.487905 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:19:31.487915 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:19:31.487924 | orchestrator |
2025-07-05 23:19:31.487934 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-07-05 23:19:31.487944 | orchestrator | Saturday 05 July 2025 23:15:40 +0000 (0:00:00.328) 0:04:59.782 *********
2025-07-05 23:19:31.487954 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:19:31.487964 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:19:31.487973 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:19:31.487983 | orchestrator |
2025-07-05 23:19:31.488001 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-07-05 23:19:31.488011 | orchestrator | Saturday 05 July 2025 23:15:40 +0000 (0:00:00.315) 0:05:00.098 *********
2025-07-05 23:19:31.488021 | orchestrator | changed: [testbed-node-3]
2025-07-05 23:19:31.488031 | orchestrator | changed: [testbed-node-5]
2025-07-05 23:19:31.488041 | orchestrator | changed: [testbed-node-4]
2025-07-05 23:19:31.488050 | orchestrator |
2025-07-05 23:19:31.488067 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-07-05 23:19:31.488078 | orchestrator | Saturday 05 July 2025 23:15:42 +0000 (0:00:01.560) 0:05:01.658 *********
2025-07-05 23:19:31.488088 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-07-05 23:19:31.488099 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-07-05 23:19:31.488109 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-07-05 23:19:31.488119 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-07-05 23:19:31.488129 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-07-05 23:19:31.488139 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-07-05 23:19:31.488149 | orchestrator |
2025-07-05 23:19:31.488159 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-07-05 23:19:31.488169 | orchestrator | Saturday 05 July 2025 23:15:45 +0000 (0:00:03.282) 0:05:04.940 *********
2025-07-05 23:19:31.488179 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-05 23:19:31.488188 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-05 23:19:31.488197 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-05 23:19:31.488205 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-05 23:19:31.488213 | orchestrator | changed: [testbed-node-3]
2025-07-05 23:19:31.488220 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-05 23:19:31.488228 | orchestrator | changed: [testbed-node-4]
2025-07-05 23:19:31.488244 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-05 23:19:31.488252 | orchestrator | changed: [testbed-node-5]
2025-07-05 23:19:31.488260 | orchestrator |
2025-07-05 23:19:31.488269 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-07-05 23:19:31.488277 | orchestrator | Saturday 05 July 2025 23:15:49 +0000 (0:00:00.121) 0:05:08.489 *********
2025-07-05 23:19:31.488285 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:19:31.488292 | orchestrator |
2025-07-05 23:19:31.488300 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-07-05 23:19:31.488309 | orchestrator | Saturday 05 July 2025 23:15:49 +0000 (0:00:00.121) 0:05:08.611 *********
2025-07-05 23:19:31.488316 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:19:31.488324 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:19:31.488332 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:19:31.488340 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.488348 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.488356 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.488364 | orchestrator |
2025-07-05 23:19:31.488372 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-07-05 23:19:31.488380 | orchestrator | Saturday 05 July 2025 23:15:50 +0000 (0:00:00.795) 0:05:09.407 *********
2025-07-05 23:19:31.488388 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-05 23:19:31.488396 | orchestrator |
2025-07-05 23:19:31.488404 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-07-05 23:19:31.488412 | orchestrator | Saturday 05 July 2025 23:15:51 +0000 (0:00:00.710) 0:05:10.118 *********
2025-07-05 23:19:31.488425 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:19:31.488433 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:19:31.488441 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:19:31.488449 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.488457 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.488465 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.488473 | orchestrator |
2025-07-05 23:19:31.488481 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-07-05 23:19:31.488489 | orchestrator | Saturday 05 July 2025 23:15:51 +0000 (0:00:00.623) 0:05:10.741 *********
2025-07-05 23:19:31.488498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared',
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488522 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', 
''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488579 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488588 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488597 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-05 
23:19:31.488649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488662 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488671 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.488680 | orchestrator | 2025-07-05 23:19:31.488688 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-07-05 23:19:31.488696 | orchestrator | Saturday 05 July 2025 23:15:56 +0000 (0:00:04.377) 0:05:15.119 ********* 2025-07-05 23:19:31.488709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-05 23:19:31.488722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-05 23:19:31.488731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-05 23:19:31.488740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-05 23:19:31.488754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-05 23:19:31.488763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-05 23:19:31.488775 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.488789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.488798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-05 23:19:31.488811 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.488820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-05 23:19:31.488828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-05 23:19:31.488845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.488854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.488863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.488871 | orchestrator |
2025-07-05 23:19:31.488879 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-07-05 23:19:31.488887 | orchestrator | Saturday 05 July 2025 23:16:03 +0000 (0:00:07.213) 0:05:22.333 *********
2025-07-05 23:19:31.488895 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:19:31.488904 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:19:31.488912 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:19:31.488920 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.488928 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.488935 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.488943 | orchestrator |
2025-07-05 23:19:31.488951 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-07-05 23:19:31.488959 | orchestrator | Saturday 05 July 2025 23:16:04 +0000 (0:00:01.258) 0:05:23.591 *********
2025-07-05 23:19:31.488967 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-05 23:19:31.488975 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-05 23:19:31.488984 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-05 23:19:31.488996 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-05 23:19:31.489004 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-05 23:19:31.489012 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.489020 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-05 23:19:31.489029 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.489037 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-05 23:19:31.489045 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-05 23:19:31.489053 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-05 23:19:31.489061 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.489074 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-05 23:19:31.489083 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-05 23:19:31.489091 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-05 23:19:31.489099 | orchestrator |
2025-07-05 23:19:31.489107 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-07-05 23:19:31.489115 | orchestrator | Saturday 05 July 2025 23:16:07 +0000 (0:00:03.349) 0:05:26.940 *********
2025-07-05 23:19:31.489123 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:19:31.489131 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:19:31.489139 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:19:31.489147 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.489155 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.489163 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.489170 | orchestrator |
2025-07-05 23:19:31.489178 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-07-05 23:19:31.489187 | orchestrator | Saturday 05 July 2025 23:16:08 +0000 (0:00:00.670) 0:05:27.611 *********
2025-07-05 23:19:31.489199 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-05 23:19:31.489207 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-05 23:19:31.489215 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-05 23:19:31.489224 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-05 23:19:31.489232 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-05 23:19:31.489240 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-05 23:19:31.489248 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-05 23:19:31.489256 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-05 23:19:31.489264 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-05 23:19:31.489272 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-05 23:19:31.489280 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.489288 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-05 23:19:31.489296 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.489304 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-05 23:19:31.489312 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.489320 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-05 23:19:31.489328 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-05 23:19:31.489336 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-05 23:19:31.489344 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-05 23:19:31.489352 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-05 23:19:31.489360 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-05 23:19:31.489373 | orchestrator |
2025-07-05 23:19:31.489381 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-07-05 23:19:31.489389 | orchestrator | Saturday 05 July 2025 23:16:14 +0000 (0:00:05.825) 0:05:33.436 *********
2025-07-05 23:19:31.489397 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-05 23:19:31.489405 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-05 23:19:31.489489 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-05 23:19:31.489499 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-05 23:19:31.489507 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-05 23:19:31.489516 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-05 23:19:31.489524 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-05 23:19:31.489532 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-05 23:19:31.489540 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-05 23:19:31.489548 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-05 23:19:31.489556 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-05 23:19:31.489564 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-05 23:19:31.489572 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-05 23:19:31.489580 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.489588 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-05 23:19:31.489596 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-05 23:19:31.489604 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.489612 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-05 23:19:31.489638 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-05 23:19:31.489646 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.489661 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-05 23:19:31.489670 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-05 23:19:31.489678 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-05 23:19:31.489686 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-05 23:19:31.489694 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-05 23:19:31.489702 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-05 23:19:31.489710 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-05 23:19:31.489718 | orchestrator |
2025-07-05 23:19:31.489726 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-07-05 23:19:31.489734 | orchestrator | Saturday 05 July 2025 23:16:22 +0000 (0:00:08.232) 0:05:41.669 *********
2025-07-05 23:19:31.489742 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:19:31.489751 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:19:31.489759 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:19:31.489767 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.489775 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.489783 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.489797 | orchestrator |
2025-07-05 23:19:31.489805 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-07-05 23:19:31.489813 | orchestrator | Saturday 05 July 2025 23:16:23 +0000 (0:00:00.762) 0:05:42.432 *********
2025-07-05 23:19:31.489821 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:19:31.489829 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:19:31.489837 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:19:31.489845 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.489853 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.489861 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.489869 | orchestrator |
2025-07-05 23:19:31.489877 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-07-05 23:19:31.489885 | orchestrator | Saturday 05 July 2025 23:16:24 +0000 (0:00:01.087) 0:05:43.519 *********
2025-07-05 23:19:31.489893 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.489901 | orchestrator | changed: [testbed-node-5]
2025-07-05 23:19:31.489909 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.489917 | orchestrator | changed: [testbed-node-3]
2025-07-05 23:19:31.489925 | orchestrator | changed: [testbed-node-4]
2025-07-05 23:19:31.489933 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.489941 | orchestrator |
2025-07-05 23:19:31.489949 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-07-05 23:19:31.489957 | orchestrator | Saturday 05 July 2025 23:16:27 +0000 (0:00:02.789) 0:05:46.308 *********
2025-07-05 23:19:31.489971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-05 23:19:31.489980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-05 23:19:31.489992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.490001 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:19:31.490010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-05 23:19:31.490047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-05 23:19:31.490058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.490066 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:19:31.490080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-05 23:19:31.490088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.490097 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.490109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-05 23:19:31.490123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.490136 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.490149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-05 23:19:31.490164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.490176 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.490195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-05 23:19:31.490209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-05 23:19:31.490227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.490250 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:19:31.490263 | orchestrator |
2025-07-05 23:19:31.490276 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-07-05 23:19:31.490289 | orchestrator | Saturday 05 July 2025 23:16:28 +0000 (0:00:01.429) 0:05:47.737 *********
2025-07-05 23:19:31.490303 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-07-05 23:19:31.490316 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-07-05 23:19:31.490329 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:19:31.490444 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-07-05 23:19:31.490457 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-07-05 23:19:31.490470 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-07-05 23:19:31.490483 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-07-05 23:19:31.490496 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:19:31.490510 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-07-05 23:19:31.490525 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-07-05 23:19:31.490538 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:19:31.490551 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-07-05 23:19:31.490560 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-07-05 23:19:31.490568 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:19:31.490576 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:31.490584 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-07-05 23:19:31.490592 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-07-05 23:19:31.490600 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:31.490607 | orchestrator |
2025-07-05 23:19:31.490662 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-07-05 23:19:31.490671 | orchestrator | Saturday 05 July 2025 23:16:29 +0000 (0:00:00.514) 0:05:48.252 *********
2025-07-05 23:19:31.490680 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-05 23:19:31.490699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-05 23:19:31.490716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-05 23:19:31.490730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-05 23:19:31.490739 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-05 23:19:31.490748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-05 23:19:31.490757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-05 23:19:31.490771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-05 23:19:31.490785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.490797 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-05 23:19:31.490806 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-05 23:19:31.490815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.490824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.490837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.490852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-05 23:19:31.490866 | orchestrator | 2025-07-05 23:19:31.490879 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-05 23:19:31.490892 | orchestrator | Saturday 05 July 2025 23:16:32 +0000 (0:00:03.290) 0:05:51.542 ********* 2025-07-05 23:19:31.490904 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:19:31.490914 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:19:31.490926 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:19:31.490937 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.490949 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.490960 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.490969 | orchestrator | 2025-07-05 23:19:31.490976 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-05 23:19:31.490983 | orchestrator | Saturday 05 July 2025 23:16:32 +0000 (0:00:00.506) 0:05:52.049 ********* 2025-07-05 23:19:31.490989 | orchestrator | 2025-07-05 23:19:31.491000 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-05 23:19:31.491007 | orchestrator | Saturday 05 July 2025 23:16:33 +0000 (0:00:00.241) 0:05:52.290 ********* 2025-07-05 23:19:31.491014 | orchestrator | 2025-07-05 23:19:31.491021 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-05 23:19:31.491028 | orchestrator | Saturday 05 July 2025 23:16:33 +0000 (0:00:00.122) 0:05:52.412 ********* 2025-07-05 23:19:31.491034 | orchestrator | 2025-07-05 23:19:31.491041 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2025-07-05 23:19:31.491048 | orchestrator | Saturday 05 July 2025 23:16:33 +0000 (0:00:00.122) 0:05:52.535 ********* 2025-07-05 23:19:31.491056 | orchestrator | 2025-07-05 23:19:31.491067 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-05 23:19:31.491078 | orchestrator | Saturday 05 July 2025 23:16:33 +0000 (0:00:00.120) 0:05:52.656 ********* 2025-07-05 23:19:31.491090 | orchestrator | 2025-07-05 23:19:31.491102 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-05 23:19:31.491113 | orchestrator | Saturday 05 July 2025 23:16:33 +0000 (0:00:00.116) 0:05:52.773 ********* 2025-07-05 23:19:31.491125 | orchestrator | 2025-07-05 23:19:31.491133 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-07-05 23:19:31.491140 | orchestrator | Saturday 05 July 2025 23:16:33 +0000 (0:00:00.116) 0:05:52.889 ********* 2025-07-05 23:19:31.491147 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.491153 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:19:31.491160 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:19:31.491167 | orchestrator | 2025-07-05 23:19:31.491174 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-07-05 23:19:31.491181 | orchestrator | Saturday 05 July 2025 23:16:44 +0000 (0:00:10.321) 0:06:03.211 ********* 2025-07-05 23:19:31.491187 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.491194 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:19:31.491201 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:19:31.491208 | orchestrator | 2025-07-05 23:19:31.491215 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-07-05 23:19:31.491222 | orchestrator | Saturday 05 July 2025 23:16:57 +0000 (0:00:13.030) 
0:06:16.241 ********* 2025-07-05 23:19:31.491234 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:19:31.491241 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:19:31.491247 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:19:31.491254 | orchestrator | 2025-07-05 23:19:31.491261 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-07-05 23:19:31.491269 | orchestrator | Saturday 05 July 2025 23:17:14 +0000 (0:00:17.124) 0:06:33.366 ********* 2025-07-05 23:19:31.491281 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:19:31.491293 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:19:31.491303 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:19:31.491314 | orchestrator | 2025-07-05 23:19:31.491326 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-07-05 23:19:31.491338 | orchestrator | Saturday 05 July 2025 23:17:52 +0000 (0:00:38.539) 0:07:11.905 ********* 2025-07-05 23:19:31.491345 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:19:31.491352 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:19:31.491358 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:19:31.491365 | orchestrator | 2025-07-05 23:19:31.491372 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-07-05 23:19:31.491379 | orchestrator | Saturday 05 July 2025 23:17:53 +0000 (0:00:01.144) 0:07:13.050 ********* 2025-07-05 23:19:31.491385 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:19:31.491392 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:19:31.491399 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:19:31.491405 | orchestrator | 2025-07-05 23:19:31.491412 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-07-05 23:19:31.491424 | orchestrator | Saturday 05 July 2025 23:17:54 +0000 (0:00:00.778) 0:07:13.828 
********* 2025-07-05 23:19:31.491431 | orchestrator | changed: [testbed-node-4] 2025-07-05 23:19:31.491438 | orchestrator | changed: [testbed-node-5] 2025-07-05 23:19:31.491444 | orchestrator | changed: [testbed-node-3] 2025-07-05 23:19:31.491451 | orchestrator | 2025-07-05 23:19:31.491458 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-07-05 23:19:31.491465 | orchestrator | Saturday 05 July 2025 23:18:23 +0000 (0:00:29.270) 0:07:43.098 ********* 2025-07-05 23:19:31.491471 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:19:31.491478 | orchestrator | 2025-07-05 23:19:31.491485 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-07-05 23:19:31.491492 | orchestrator | Saturday 05 July 2025 23:18:24 +0000 (0:00:00.138) 0:07:43.237 ********* 2025-07-05 23:19:31.491498 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:19:31.491505 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:19:31.491512 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.491519 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.491525 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.491532 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2025-07-05 23:19:31.491540 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-05 23:19:31.491547 | orchestrator | 2025-07-05 23:19:31.491553 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-07-05 23:19:31.491560 | orchestrator | Saturday 05 July 2025 23:18:45 +0000 (0:00:21.126) 0:08:04.364 ********* 2025-07-05 23:19:31.491567 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.491573 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:19:31.491580 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.491587 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:19:31.491593 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:19:31.491600 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.491607 | orchestrator | 2025-07-05 23:19:31.491630 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-07-05 23:19:31.491637 | orchestrator | Saturday 05 July 2025 23:18:54 +0000 (0:00:08.863) 0:08:13.227 ********* 2025-07-05 23:19:31.491653 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:19:31.491664 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:19:31.491671 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.491678 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.491685 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.491691 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-07-05 23:19:31.491698 | orchestrator | 2025-07-05 23:19:31.491705 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-05 23:19:31.491712 | orchestrator | Saturday 05 July 2025 23:18:58 +0000 (0:00:04.118) 0:08:17.346 ********* 2025-07-05 23:19:31.491718 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-05 23:19:31.491725 | 
orchestrator | 2025-07-05 23:19:31.491732 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-05 23:19:31.491739 | orchestrator | Saturday 05 July 2025 23:19:10 +0000 (0:00:11.981) 0:08:29.328 ********* 2025-07-05 23:19:31.491746 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-05 23:19:31.491752 | orchestrator | 2025-07-05 23:19:31.491759 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-07-05 23:19:31.491766 | orchestrator | Saturday 05 July 2025 23:19:11 +0000 (0:00:01.222) 0:08:30.550 ********* 2025-07-05 23:19:31.491773 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:19:31.491779 | orchestrator | 2025-07-05 23:19:31.491786 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-07-05 23:19:31.491793 | orchestrator | Saturday 05 July 2025 23:19:12 +0000 (0:00:01.255) 0:08:31.806 ********* 2025-07-05 23:19:31.491800 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-05 23:19:31.491807 | orchestrator | 2025-07-05 23:19:31.491813 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-07-05 23:19:31.491820 | orchestrator | Saturday 05 July 2025 23:19:23 +0000 (0:00:10.500) 0:08:42.307 ********* 2025-07-05 23:19:31.491827 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:19:31.491834 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:19:31.491874 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:19:31.491882 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:19:31.491888 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:19:31.491895 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:19:31.491902 | orchestrator | 2025-07-05 23:19:31.491909 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-07-05 23:19:31.491916 | orchestrator | 2025-07-05 
23:19:31.491923 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-07-05 23:19:31.491929 | orchestrator | Saturday 05 July 2025 23:19:24 +0000 (0:00:01.764) 0:08:44.071 ********* 2025-07-05 23:19:31.491936 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:19:31.491943 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:19:31.491950 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:19:31.491956 | orchestrator | 2025-07-05 23:19:31.491963 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-07-05 23:19:31.491970 | orchestrator | 2025-07-05 23:19:31.491976 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-07-05 23:19:31.491983 | orchestrator | Saturday 05 July 2025 23:19:26 +0000 (0:00:01.076) 0:08:45.147 ********* 2025-07-05 23:19:31.491990 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.491997 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.492003 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.492010 | orchestrator | 2025-07-05 23:19:31.492017 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-07-05 23:19:31.492024 | orchestrator | 2025-07-05 23:19:31.492031 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-07-05 23:19:31.492037 | orchestrator | Saturday 05 July 2025 23:19:26 +0000 (0:00:00.512) 0:08:45.660 ********* 2025-07-05 23:19:31.492044 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-07-05 23:19:31.492062 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-07-05 23:19:31.492074 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-07-05 23:19:31.492085 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-07-05 23:19:31.492097 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-07-05 23:19:31.492109 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-07-05 23:19:31.492121 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:19:31.492132 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-07-05 23:19:31.492144 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-07-05 23:19:31.492151 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-07-05 23:19:31.492157 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-07-05 23:19:31.492164 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-07-05 23:19:31.492175 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-07-05 23:19:31.492187 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:19:31.492198 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-07-05 23:19:31.492209 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-07-05 23:19:31.492219 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-07-05 23:19:31.492231 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-07-05 23:19:31.492242 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-07-05 23:19:31.492254 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-07-05 23:19:31.492265 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-07-05 23:19:31.492275 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-07-05 23:19:31.492282 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-07-05 23:19:31.492288 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-07-05 23:19:31.492295 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-07-05 
23:19:31.492302 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-07-05 23:19:31.492314 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:19:31.492321 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-07-05 23:19:31.492328 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-07-05 23:19:31.492338 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-07-05 23:19:31.492349 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-07-05 23:19:31.492361 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-07-05 23:19:31.492373 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-07-05 23:19:31.492385 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.492392 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.492399 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-07-05 23:19:31.492406 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-07-05 23:19:31.492412 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-07-05 23:19:31.492419 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-07-05 23:19:31.492426 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-07-05 23:19:31.492433 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-07-05 23:19:31.492439 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.492446 | orchestrator | 2025-07-05 23:19:31.492453 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-07-05 23:19:31.492459 | orchestrator | 2025-07-05 23:19:31.492466 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-07-05 23:19:31.492473 | orchestrator | Saturday 05 July 2025 23:19:27 +0000 (0:00:01.225) 
0:08:46.885 ********* 2025-07-05 23:19:31.492485 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-07-05 23:19:31.492492 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-07-05 23:19:31.492541 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.492554 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-07-05 23:19:31.492566 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-07-05 23:19:31.492576 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.492588 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-07-05 23:19:31.492595 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-07-05 23:19:31.492602 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:31.492608 | orchestrator | 2025-07-05 23:19:31.492633 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-07-05 23:19:31.492641 | orchestrator | 2025-07-05 23:19:31.492647 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-07-05 23:19:31.492654 | orchestrator | Saturday 05 July 2025 23:19:28 +0000 (0:00:00.715) 0:08:47.600 ********* 2025-07-05 23:19:31.492661 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.492715 | orchestrator | 2025-07-05 23:19:31.492724 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-07-05 23:19:31.492732 | orchestrator | 2025-07-05 23:19:31.492738 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-07-05 23:19:31.492746 | orchestrator | Saturday 05 July 2025 23:19:29 +0000 (0:00:00.653) 0:08:48.254 ********* 2025-07-05 23:19:31.492752 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:31.492759 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:31.492766 | orchestrator | skipping: [testbed-node-2] 
2025-07-05 23:19:31.492773 | orchestrator | 2025-07-05 23:19:31.492779 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:19:31.492794 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:19:31.492802 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-07-05 23:19:31.492809 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-07-05 23:19:31.492816 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-07-05 23:19:31.492823 | orchestrator | testbed-node-3 : ok=43  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-05 23:19:31.492830 | orchestrator | testbed-node-4 : ok=37  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-07-05 23:19:31.492837 | orchestrator | testbed-node-5 : ok=37  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-07-05 23:19:31.492843 | orchestrator | 2025-07-05 23:19:31.492850 | orchestrator | 2025-07-05 23:19:31.492857 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:19:31.492864 | orchestrator | Saturday 05 July 2025 23:19:29 +0000 (0:00:00.446) 0:08:48.700 ********* 2025-07-05 23:19:31.492870 | orchestrator | =============================================================================== 2025-07-05 23:19:31.492877 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 38.54s 2025-07-05 23:19:31.492884 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.60s 2025-07-05 23:19:31.492890 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 29.27s 2025-07-05 23:19:31.492897 | orchestrator | nova-cell : 
Waiting for nova-compute services to register themselves --- 21.13s 2025-07-05 23:19:31.492915 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.47s 2025-07-05 23:19:31.492922 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.22s 2025-07-05 23:19:31.492929 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.58s 2025-07-05 23:19:31.492936 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 17.12s 2025-07-05 23:19:31.492942 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.00s 2025-07-05 23:19:31.492949 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.03s 2025-07-05 23:19:31.492956 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.98s 2025-07-05 23:19:31.492963 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.96s 2025-07-05 23:19:31.492969 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.85s 2025-07-05 23:19:31.492976 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.84s 2025-07-05 23:19:31.492983 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.27s 2025-07-05 23:19:31.492990 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.50s 2025-07-05 23:19:31.492997 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 10.32s 2025-07-05 23:19:31.493003 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.01s 2025-07-05 23:19:31.493010 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.86s 2025-07-05 23:19:31.493017 | orchestrator | nova-cell : Copying files 
for nova-ssh ---------------------------------- 8.23s 2025-07-05 23:19:31.493024 | orchestrator | 2025-07-05 23:19:31 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state STARTED 2025-07-05 23:19:31.493031 | orchestrator | 2025-07-05 23:19:31 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:19:31.493038 | orchestrator | 2025-07-05 23:19:31 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:19:34.533673 | orchestrator | 2025-07-05 23:19:34.533773 | orchestrator | 2025-07-05 23:19:34.533788 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:19:34.533799 | orchestrator | 2025-07-05 23:19:34.533810 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 23:19:34.533941 | orchestrator | Saturday 05 July 2025 23:17:14 +0000 (0:00:00.261) 0:00:00.261 ********* 2025-07-05 23:19:34.533954 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:19:34.533966 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:19:34.533976 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:19:34.533986 | orchestrator | 2025-07-05 23:19:34.533996 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:19:34.534006 | orchestrator | Saturday 05 July 2025 23:17:15 +0000 (0:00:00.294) 0:00:00.556 ********* 2025-07-05 23:19:34.534062 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-07-05 23:19:34.534075 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-07-05 23:19:34.534085 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-07-05 23:19:34.534095 | orchestrator | 2025-07-05 23:19:34.534104 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-07-05 23:19:34.534114 | orchestrator | 2025-07-05 23:19:34.534124 | orchestrator | TASK [grafana : include_tasks] 
************************************************* 2025-07-05 23:19:34.534134 | orchestrator | Saturday 05 July 2025 23:17:15 +0000 (0:00:00.550) 0:00:01.106 *********
included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [grafana : Ensuring config directories exist] *****************************
Saturday 05 July 2025 23:17:16 +0000 (0:00:00.646) 0:00:01.752 *********
changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})

TASK [grafana : Check if extra configuration file exists] **********************
Saturday 05 July 2025 23:17:17 +0000 (0:00:01.075) 0:00:02.828 *********
[WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
issue: '/operations/prometheus/grafana' is not a directory
ok: [testbed-node-0 -> localhost]

TASK [grafana : include_tasks] *************************************************
Saturday 05 July 2025 23:17:18 +0000 (0:00:00.963) 0:00:03.792 *********
included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
Saturday 05 July 2025 23:17:18 +0000 (0:00:00.647) 0:00:04.439 *********
changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})

TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
Saturday 05 July 2025 23:17:20 +0000 (0:00:01.455) 0:00:05.895 *********
skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-2]

TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
Saturday 05 July 2025 23:17:20 +0000 (0:00:00.317) 0:00:06.212 *********
skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-2]

TASK [grafana : Copying over config.json files] ********************************
Saturday 05 July 2025 23:17:21 +0000 (0:00:00.775) 0:00:06.988 *********
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})

TASK [grafana : Copying over grafana.ini] **************************************
Saturday 05 July 2025 23:17:22 +0000 (0:00:01.294) 0:00:08.282 *********
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})

TASK [grafana : Copying over extra configuration file] *************************
Saturday 05 July 2025 23:17:24 +0000 (0:00:01.518) 0:00:09.801 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [grafana : Configuring Prometheus as data source for Grafana] *************
Saturday 05 July 2025 23:17:24 +0000 (0:00:00.642) 0:00:10.443 *********
changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)

TASK [grafana : Configuring dashboards provisioning] ***************************
Saturday 05 July 2025 23:17:26 +0000 (0:00:01.320) 0:00:11.763 *********
changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)

TASK [grafana : Find custom grafana dashboards] ********************************
Saturday 05 July 2025 23:17:27 +0000 (0:00:01.241) 0:00:13.004 *********
ok: [testbed-node-0 -> localhost]

TASK [grafana : Find templated grafana dashboards] *****************************
Saturday 05 July 2025 23:17:28 +0000 (0:00:00.717) 0:00:13.721 *********
[WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
issue: '/etc/kolla/grafana/dashboards' is not a directory
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [grafana : Prune templated Grafana dashboards] ****************************
Saturday 05 July 2025 23:17:29 +0000 (0:00:00.749) 0:00:14.470 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [grafana : Copying over custom dashboards] ********************************
Saturday 05 July 2025 23:17:29 +0000 (0:00:00.507) 0:00:14.978 *********
changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1082430, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.068454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1082430, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.068454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1082430, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.068454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1082406, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.062454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1082406, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.062454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1082406, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.062454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1082390, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0614538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1082390, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0614538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1082390, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0614538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1082421, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.065454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1082421, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.065454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1082421, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.065454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1082368, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0584538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1082368, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0584538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1082368, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0584538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1082398, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0614538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1082398, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0614538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1082398, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0614538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1082416, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0644538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1082416, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0644538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1082416, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0644538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1082362, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0564537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1082362, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0564537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1082362, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0564537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1082332, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0534537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1082332, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0534537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1082332, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0534537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1082372, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0584538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1082372, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0584538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1082372, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime':
1751754431.0584538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1082348, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0554538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1082348, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0554538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1082348, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 
1751673736.0, 'ctime': 1751754431.0554538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1082410, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.063454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1082410, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.063454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1082410, 'dev': 102, 
'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.063454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1082377, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0594537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1082377, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0594537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1082377, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0594537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1082425, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.065454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1082425, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.065454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1082425, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.065454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1082358, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0564537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1082358, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0564537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1082358, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0564537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1082402, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.062454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1082402, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.062454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1082402, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.062454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1082335, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0544536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1082335, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0544536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1082335, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0544536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1082351, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0564537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1082351, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0564537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537498 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1082351, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0564537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1082384, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.060454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1082384, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.060454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537534 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1082384, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.060454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1082517, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0874543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1082517, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0874543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1082517, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0874543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1082504, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0794542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1082504, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0794542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1082504, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0794542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1082445, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.069454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1082445, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 
1751754431.069454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1082445, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.069454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1082559, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0944545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1082559, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0944545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1082559, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0944545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1082450, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.069454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1082450, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.069454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1082450, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.069454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1082552, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0914543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1082552, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0914543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1082552, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0914543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1082569, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0974545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537824 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1082569, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0974545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1082569, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0974545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1082543, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0884542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1082543, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0884542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1082543, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0884542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1082548, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0904543, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1082548, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0904543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1082548, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0904543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1082454, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 
'mtime': 1751673736.0, 'ctime': 1751754431.0704541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1082454, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0704541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1082454, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0704541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 
1082507, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.080454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1082507, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.080454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.537997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1082507, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.080454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1082583, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0974545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1082583, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0974545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1082583, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0974545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1082555, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0914543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1082555, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0914543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1082555, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0914543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1082466, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0734541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1082466, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0734541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1082466, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0734541, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538180 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1082461, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.071454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1082461, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.071454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1082461, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.071454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-07-05 23:19:34.538217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1082474, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.074454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1082474, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.074454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1082474, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.074454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1082479, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0794542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1082479, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0794542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1082479, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0794542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1082510, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0814543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1082510, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0814543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1082510, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 
'ctime': 1751754431.0814543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1082546, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0884542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1082546, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0884542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1082546, 'dev': 102, 'nlink': 1, 'atime': 
1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0884542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1082514, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0814543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1082514, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0814543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1082514, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0814543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1082588, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0984545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1082588, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0984545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-05 23:19:34.538439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1082588, 'dev': 102, 'nlink': 1, 'atime': 1751673736.0, 'mtime': 1751673736.0, 'ctime': 1751754431.0984545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-05 23:19:34.538450 | orchestrator |
2025-07-05 23:19:34.538460 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-07-05 23:19:34.538470 | orchestrator | Saturday 05 July 2025 23:18:07 +0000 (0:00:37.793) 0:00:52.772 *********
2025-07-05 23:19:34.538480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-05 23:19:34.538495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-05 23:19:34.538506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-05 23:19:34.538521 | orchestrator |
2025-07-05 23:19:34.538531 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-07-05 23:19:34.538541 | orchestrator | Saturday 05 July 2025 23:18:08 +0000 (0:00:01.028) 0:00:53.801 *********
2025-07-05 23:19:34.538551 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:19:34.538561 | orchestrator |
2025-07-05 23:19:34.538571 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-07-05 23:19:34.538581 | orchestrator | Saturday 05 July 2025 23:18:10 +0000 (0:00:02.368) 0:00:56.169 *********
2025-07-05 23:19:34.538591 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:19:34.538601 | orchestrator |
2025-07-05 23:19:34.538610 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-05 23:19:34.538678 | orchestrator | Saturday 05 July 2025 23:18:12 +0000 (0:00:00.216) 0:00:58.249 *********
2025-07-05 23:19:34.538688 | orchestrator |
2025-07-05 23:19:34.538698 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-05 23:19:34.538714 | orchestrator | Saturday 05 July 2025 23:18:12 +0000 (0:00:00.063) 0:00:58.465 *********
2025-07-05 23:19:34.538724 | orchestrator |
2025-07-05 23:19:34.538734 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-05 23:19:34.538744 | orchestrator | Saturday 05 July 2025 23:18:13 +0000 (0:00:00.064) 0:00:58.529 *********
2025-07-05 23:19:34.538754 | orchestrator |
2025-07-05 23:19:34.538764 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-07-05 23:19:34.538774 | orchestrator | Saturday 05 July 2025 23:18:13 +0000 (0:00:00.064) 0:00:58.594 *********
2025-07-05 23:19:34.538784 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:34.538794 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:34.538804 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:19:34.538814 | orchestrator |
2025-07-05 23:19:34.538824 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-07-05 23:19:34.538834 | orchestrator | Saturday 05 July 2025 23:18:14 +0000 (0:00:01.718) 0:01:00.312 *********
2025-07-05 23:19:34.538844 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:19:34.538853 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:19:34.538863 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-07-05 23:19:34.538873 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-07-05 23:19:34.538883 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-07-05 23:19:34.538893 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:19:34.538903 | orchestrator | 2025-07-05 23:19:34.538913 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-07-05 23:19:34.538923 | orchestrator | Saturday 05 July 2025 23:18:53 +0000 (0:00:38.743) 0:01:39.056 ********* 2025-07-05 23:19:34.538933 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:34.538943 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:19:34.538953 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:19:34.538962 | orchestrator | 2025-07-05 23:19:34.538972 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-07-05 23:19:34.538982 | orchestrator | Saturday 05 July 2025 23:19:27 +0000 (0:00:33.905) 0:02:12.961 ********* 2025-07-05 23:19:34.538992 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:19:34.539001 | orchestrator | 2025-07-05 23:19:34.539011 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-07-05 23:19:34.539021 | orchestrator | Saturday 05 July 2025 23:19:29 +0000 (0:00:02.448) 0:02:15.410 ********* 2025-07-05 23:19:34.539031 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:34.539041 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:19:34.539051 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:19:34.539067 | orchestrator | 2025-07-05 23:19:34.539077 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-07-05 23:19:34.539087 | orchestrator | Saturday 05 July 2025 23:19:30 +0000 (0:00:00.313) 0:02:15.724 ********* 2025-07-05 23:19:34.539098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-07-05 23:19:34.539116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-07-05 23:19:34.539127 | orchestrator | 2025-07-05 23:19:34.539137 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-07-05 23:19:34.539147 | orchestrator | Saturday 05 July 2025 23:19:32 +0000 (0:00:02.387) 0:02:18.111 ********* 2025-07-05 23:19:34.539157 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:19:34.539167 | orchestrator | 2025-07-05 23:19:34.539176 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:19:34.539187 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-05 23:19:34.539197 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-05 23:19:34.539208 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-05 23:19:34.539216 | orchestrator | 2025-07-05 23:19:34.539224 | orchestrator | 2025-07-05 23:19:34.539232 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:19:34.539240 | orchestrator | Saturday 05 July 2025 23:19:32 +0000 (0:00:00.268) 0:02:18.380 ********* 2025-07-05 23:19:34.539248 | orchestrator | =============================================================================== 2025-07-05 23:19:34.539256 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.74s 2025-07-05 23:19:34.539264 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 37.79s 2025-07-05 23:19:34.539272 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 33.91s 2025-07-05 23:19:34.539280 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.45s 2025-07-05 23:19:34.539288 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.39s 2025-07-05 23:19:34.539300 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.37s 2025-07-05 23:19:34.539308 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.08s 2025-07-05 23:19:34.539316 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.72s 2025-07-05 23:19:34.539324 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.52s 2025-07-05 23:19:34.539332 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.46s 2025-07-05 23:19:34.539340 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.32s 2025-07-05 23:19:34.539348 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.29s 2025-07-05 23:19:34.539356 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.24s 2025-07-05 23:19:34.539364 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.08s 2025-07-05 23:19:34.539372 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.03s 2025-07-05 23:19:34.539380 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.96s 2025-07-05 23:19:34.539395 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.78s 2025-07-05 23:19:34.539403 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.75s 2025-07-05 23:19:34.539411 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.72s 2025-07-05 23:19:34.539419 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.65s 2025-07-05 23:19:34.539427 | orchestrator | 2025-07-05 23:19:34 | INFO  | Task a2d1983d-d05d-4d66-a517-3b1a20d277e2 is in state SUCCESS 2025-07-05 23:19:34.539435 | orchestrator | 2025-07-05 23:19:34 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state STARTED 2025-07-05 23:19:34.539443 | orchestrator | 2025-07-05 23:19:34 | INFO  | Wait 1 second(s) until the next check 2025-07-05 23:22:09.793069 | orchestrator | 2025-07-05 23:22:09 | INFO  | Task 21f39e13-5411-45b3-8eaa-f932af5fe5c3 is in state SUCCESS 2025-07-05 23:22:09.794231 | orchestrator | 2025-07-05 23:22:09.794275 | orchestrator | 2025-07-05 23:22:09.794288 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-05 23:22:09.794301 | orchestrator | 2025-07-05 23:22:09.794313 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-05 23:22:09.794325 | orchestrator | Saturday 05 July 2025 23:17:27 +0000 (0:00:00.262) 0:00:00.262 ********* 2025-07-05 23:22:09.794389 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:22:09.794403 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:22:09.794501 | orchestrator | ok: [testbed-node-2] 
2025-07-05 23:22:09.794513 | orchestrator | 2025-07-05 23:22:09.794525 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-05 23:22:09.794537 | orchestrator | Saturday 05 July 2025 23:17:28 +0000 (0:00:00.290) 0:00:00.552 ********* 2025-07-05 23:22:09.794653 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-07-05 23:22:09.794670 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-07-05 23:22:09.794681 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-07-05 23:22:09.794692 | orchestrator | 2025-07-05 23:22:09.795218 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-07-05 23:22:09.795237 | orchestrator | 2025-07-05 23:22:09.795248 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-05 23:22:09.795260 | orchestrator | Saturday 05 July 2025 23:17:28 +0000 (0:00:00.419) 0:00:00.971 ********* 2025-07-05 23:22:09.795271 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:22:09.795308 | orchestrator | 2025-07-05 23:22:09.795320 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-07-05 23:22:09.795331 | orchestrator | Saturday 05 July 2025 23:17:29 +0000 (0:00:00.532) 0:00:01.504 ********* 2025-07-05 23:22:09.795343 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-07-05 23:22:09.795354 | orchestrator | 2025-07-05 23:22:09.795366 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-07-05 23:22:09.795376 | orchestrator | Saturday 05 July 2025 23:17:32 +0000 (0:00:03.564) 0:00:05.069 ********* 2025-07-05 23:22:09.795388 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-07-05 
23:22:09.795401 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-07-05 23:22:09.795412 | orchestrator | 2025-07-05 23:22:09.795423 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-07-05 23:22:09.795434 | orchestrator | Saturday 05 July 2025 23:17:39 +0000 (0:00:06.676) 0:00:11.745 ********* 2025-07-05 23:22:09.795445 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-05 23:22:09.795457 | orchestrator | 2025-07-05 23:22:09.795468 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-07-05 23:22:09.795479 | orchestrator | Saturday 05 July 2025 23:17:42 +0000 (0:00:03.611) 0:00:15.356 ********* 2025-07-05 23:22:09.795490 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-05 23:22:09.795501 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-07-05 23:22:09.795513 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-07-05 23:22:09.795524 | orchestrator | 2025-07-05 23:22:09.795536 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-07-05 23:22:09.795583 | orchestrator | Saturday 05 July 2025 23:17:51 +0000 (0:00:08.493) 0:00:23.850 ********* 2025-07-05 23:22:09.795596 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-05 23:22:09.795607 | orchestrator | 2025-07-05 23:22:09.795618 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-07-05 23:22:09.795629 | orchestrator | Saturday 05 July 2025 23:17:54 +0000 (0:00:03.547) 0:00:27.397 ********* 2025-07-05 23:22:09.795640 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-07-05 23:22:09.795651 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-07-05 23:22:09.795662 | orchestrator | 2025-07-05 
23:22:09.795673 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-07-05 23:22:09.795684 | orchestrator | Saturday 05 July 2025 23:18:02 +0000 (0:00:07.564) 0:00:34.962 ********* 2025-07-05 23:22:09.795696 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-07-05 23:22:09.795776 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-07-05 23:22:09.795788 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-07-05 23:22:09.795799 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-07-05 23:22:09.795810 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-07-05 23:22:09.795821 | orchestrator | 2025-07-05 23:22:09.795832 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-05 23:22:09.795846 | orchestrator | Saturday 05 July 2025 23:18:18 +0000 (0:00:15.632) 0:00:50.594 ********* 2025-07-05 23:22:09.795858 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:22:09.795871 | orchestrator | 2025-07-05 23:22:09.795883 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-07-05 23:22:09.795896 | orchestrator | Saturday 05 July 2025 23:18:18 +0000 (0:00:00.577) 0:00:51.171 ********* 2025-07-05 23:22:09.795908 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.795921 | orchestrator | 2025-07-05 23:22:09.795934 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-07-05 23:22:09.795957 | orchestrator | Saturday 05 July 2025 23:18:24 +0000 (0:00:05.439) 0:00:56.611 ********* 2025-07-05 23:22:09.795970 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.795983 | orchestrator | 2025-07-05 23:22:09.795996 | orchestrator | TASK [octavia : 
Get service project id] **************************************** 2025-07-05 23:22:09.796058 | orchestrator | Saturday 05 July 2025 23:18:28 +0000 (0:00:04.466) 0:01:01.077 ********* 2025-07-05 23:22:09.796072 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:22:09.796083 | orchestrator | 2025-07-05 23:22:09.796094 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-07-05 23:22:09.796106 | orchestrator | Saturday 05 July 2025 23:18:31 +0000 (0:00:03.173) 0:01:04.251 ********* 2025-07-05 23:22:09.796116 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-07-05 23:22:09.796127 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-07-05 23:22:09.796139 | orchestrator | 2025-07-05 23:22:09.796149 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-07-05 23:22:09.796160 | orchestrator | Saturday 05 July 2025 23:18:42 +0000 (0:00:10.872) 0:01:15.123 ********* 2025-07-05 23:22:09.796172 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-07-05 23:22:09.796184 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-07-05 23:22:09.796197 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-07-05 23:22:09.796209 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-07-05 23:22:09.796220 | orchestrator | 2025-07-05 23:22:09.796232 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-07-05 23:22:09.796242 | orchestrator | Saturday 05 July 2025 23:19:00 +0000 (0:00:17.548) 
0:01:32.672 ********* 2025-07-05 23:22:09.796253 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.796264 | orchestrator | 2025-07-05 23:22:09.796275 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-07-05 23:22:09.796286 | orchestrator | Saturday 05 July 2025 23:19:05 +0000 (0:00:04.867) 0:01:37.539 ********* 2025-07-05 23:22:09.796297 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.796308 | orchestrator | 2025-07-05 23:22:09.796319 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-07-05 23:22:09.796330 | orchestrator | Saturday 05 July 2025 23:19:10 +0000 (0:00:05.928) 0:01:43.468 ********* 2025-07-05 23:22:09.796341 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:22:09.796352 | orchestrator | 2025-07-05 23:22:09.796363 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-07-05 23:22:09.796374 | orchestrator | Saturday 05 July 2025 23:19:11 +0000 (0:00:00.225) 0:01:43.694 ********* 2025-07-05 23:22:09.796385 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.796396 | orchestrator | 2025-07-05 23:22:09.796407 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-05 23:22:09.796418 | orchestrator | Saturday 05 July 2025 23:19:15 +0000 (0:00:04.728) 0:01:48.423 ********* 2025-07-05 23:22:09.796429 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:22:09.796440 | orchestrator | 2025-07-05 23:22:09.796451 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-07-05 23:22:09.796462 | orchestrator | Saturday 05 July 2025 23:19:17 +0000 (0:00:01.168) 0:01:49.591 ********* 2025-07-05 23:22:09.796473 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:22:09.796492 | orchestrator | 
changed: [testbed-node-1] 2025-07-05 23:22:09.796503 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.796514 | orchestrator | 2025-07-05 23:22:09.796526 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-07-05 23:22:09.796568 | orchestrator | Saturday 05 July 2025 23:19:22 +0000 (0:00:05.538) 0:01:55.130 ********* 2025-07-05 23:22:09.796581 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:22:09.796592 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.796603 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:22:09.796614 | orchestrator | 2025-07-05 23:22:09.796626 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-07-05 23:22:09.796637 | orchestrator | Saturday 05 July 2025 23:19:27 +0000 (0:00:04.456) 0:01:59.587 ********* 2025-07-05 23:22:09.796648 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.796659 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:22:09.796670 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:22:09.796681 | orchestrator | 2025-07-05 23:22:09.796692 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-07-05 23:22:09.796703 | orchestrator | Saturday 05 July 2025 23:19:27 +0000 (0:00:00.782) 0:02:00.369 ********* 2025-07-05 23:22:09.796714 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:22:09.796726 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:22:09.796737 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:22:09.796748 | orchestrator | 2025-07-05 23:22:09.796759 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-07-05 23:22:09.796770 | orchestrator | Saturday 05 July 2025 23:19:29 +0000 (0:00:02.081) 0:02:02.451 ********* 2025-07-05 23:22:09.796781 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.796792 | orchestrator | changed: [testbed-node-2] 
2025-07-05 23:22:09.796803 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:22:09.796814 | orchestrator | 2025-07-05 23:22:09.796825 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-07-05 23:22:09.796837 | orchestrator | Saturday 05 July 2025 23:19:31 +0000 (0:00:01.299) 0:02:03.751 ********* 2025-07-05 23:22:09.796848 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.796859 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:22:09.796870 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:22:09.796881 | orchestrator | 2025-07-05 23:22:09.796892 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-07-05 23:22:09.796903 | orchestrator | Saturday 05 July 2025 23:19:32 +0000 (0:00:01.168) 0:02:04.919 ********* 2025-07-05 23:22:09.796915 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:22:09.796926 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.796937 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:22:09.796948 | orchestrator | 2025-07-05 23:22:09.796993 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-07-05 23:22:09.797006 | orchestrator | Saturday 05 July 2025 23:19:34 +0000 (0:00:02.004) 0:02:06.923 ********* 2025-07-05 23:22:09.797017 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.797028 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:22:09.797039 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:22:09.797050 | orchestrator | 2025-07-05 23:22:09.797062 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-07-05 23:22:09.797073 | orchestrator | Saturday 05 July 2025 23:19:36 +0000 (0:00:01.735) 0:02:08.659 ********* 2025-07-05 23:22:09.797084 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:22:09.797095 | orchestrator | ok: [testbed-node-1] 2025-07-05 
23:22:09.797106 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:22:09.797117 | orchestrator | 2025-07-05 23:22:09.797129 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-07-05 23:22:09.797140 | orchestrator | Saturday 05 July 2025 23:19:36 +0000 (0:00:00.647) 0:02:09.307 ********* 2025-07-05 23:22:09.797151 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:22:09.797162 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:22:09.797173 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:22:09.797184 | orchestrator | 2025-07-05 23:22:09.797195 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-05 23:22:09.797207 | orchestrator | Saturday 05 July 2025 23:19:39 +0000 (0:00:02.680) 0:02:11.988 ********* 2025-07-05 23:22:09.797225 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:22:09.797237 | orchestrator | 2025-07-05 23:22:09.797248 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-07-05 23:22:09.797259 | orchestrator | Saturday 05 July 2025 23:19:40 +0000 (0:00:00.688) 0:02:12.676 ********* 2025-07-05 23:22:09.797270 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:22:09.797281 | orchestrator | 2025-07-05 23:22:09.797292 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-07-05 23:22:09.797304 | orchestrator | Saturday 05 July 2025 23:19:43 +0000 (0:00:03.693) 0:02:16.370 ********* 2025-07-05 23:22:09.797315 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:22:09.797326 | orchestrator | 2025-07-05 23:22:09.797337 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-07-05 23:22:09.797348 | orchestrator | Saturday 05 July 2025 23:19:47 +0000 (0:00:03.152) 0:02:19.522 ********* 2025-07-05 23:22:09.797359 | 
orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-07-05 23:22:09.797370 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-07-05 23:22:09.797381 | orchestrator | 2025-07-05 23:22:09.797393 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-07-05 23:22:09.797403 | orchestrator | Saturday 05 July 2025 23:19:54 +0000 (0:00:07.087) 0:02:26.610 ********* 2025-07-05 23:22:09.797414 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:22:09.797426 | orchestrator | 2025-07-05 23:22:09.797437 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-07-05 23:22:09.797448 | orchestrator | Saturday 05 July 2025 23:19:57 +0000 (0:00:03.342) 0:02:29.953 ********* 2025-07-05 23:22:09.797459 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:22:09.797470 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:22:09.797481 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:22:09.797492 | orchestrator | 2025-07-05 23:22:09.797503 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-07-05 23:22:09.797515 | orchestrator | Saturday 05 July 2025 23:19:57 +0000 (0:00:00.318) 0:02:30.271 ********* 2025-07-05 23:22:09.797535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.797660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.797685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.797699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.797712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.797729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.797742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.797755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.797798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.797817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.797829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.797839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.797854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.797866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.797876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.797892 | orchestrator | 2025-07-05 23:22:09.797902 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-07-05 23:22:09.797913 | orchestrator | Saturday 05 July 2025 23:20:00 +0000 (0:00:02.633) 0:02:32.905 ********* 2025-07-05 23:22:09.797923 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:22:09.797933 | orchestrator | 2025-07-05 23:22:09.797969 | 
orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-07-05 23:22:09.797981 | orchestrator | Saturday 05 July 2025 23:20:00 +0000 (0:00:00.349) 0:02:33.255 ********* 2025-07-05 23:22:09.797990 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:22:09.798000 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:22:09.798010 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:22:09.798051 | orchestrator | 2025-07-05 23:22:09.798062 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-07-05 23:22:09.798072 | orchestrator | Saturday 05 July 2025 23:20:01 +0000 (0:00:00.312) 0:02:33.568 ********* 2025-07-05 23:22:09.798083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-05 23:22:09.798101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:22:09.798126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.798143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.798160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:22:09.798186 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:22:09.798245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-05 23:22:09.798264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:22:09.798281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.798297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.798328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:22:09.798346 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:22:09.798363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-05 23:22:09.798437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:22:09.798458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.798476 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.798490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:22:09.798500 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:22:09.798510 | orchestrator | 2025-07-05 23:22:09.798520 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-05 23:22:09.798531 | orchestrator | Saturday 05 July 2025 23:20:01 +0000 (0:00:00.679) 0:02:34.247 ********* 2025-07-05 23:22:09.798577 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-05 23:22:09.798590 | orchestrator | 2025-07-05 23:22:09.798600 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-07-05 23:22:09.798610 | orchestrator | Saturday 05 July 2025 23:20:02 +0000 
(0:00:00.521) 0:02:34.769 ********* 2025-07-05 23:22:09.798620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.798673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.798685 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.798696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.798707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 
2025-07-05 23:22:09.798723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.798739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.798750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.798767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.798777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.798788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.798803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.798820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.798831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.798849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.798859 | orchestrator | 2025-07-05 23:22:09.798869 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-07-05 23:22:09.798880 | orchestrator | Saturday 05 July 2025 23:20:07 +0000 (0:00:05.296) 0:02:40.066 ********* 2025-07-05 23:22:09.798890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-05 23:22:09.798900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:22:09.798911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.798931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.798942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:22:09.798952 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:22:09.798969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-05 23:22:09.798979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:22:09.798990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.799000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.799020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:22:09.799030 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:22:09.799041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-05 23:22:09.799057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:22:09.799067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.799078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.799088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:22:09.799104 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:22:09.799114 | orchestrator | 2025-07-05 23:22:09.799124 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-07-05 23:22:09.799135 | orchestrator | Saturday 05 July 2025 23:20:08 +0000 (0:00:00.730) 0:02:40.797 ********* 2025-07-05 23:22:09.799150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-05 23:22:09.799160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:22:09.799171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.799193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.799210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:22:09.799227 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:22:09.799245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2025-07-05 23:22:09.799282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:22:09.799300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.799317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.799342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 23:22:09.799358 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:22:09.799374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-05 23:22:09.799398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-05 23:22:09.799422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.799440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-05 23:22:09.799458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-05 
23:22:09.799474 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:22:09.799490 | orchestrator | 2025-07-05 23:22:09.799507 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-07-05 23:22:09.799524 | orchestrator | Saturday 05 July 2025 23:20:09 +0000 (0:00:00.973) 0:02:41.771 ********* 2025-07-05 23:22:09.799590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.799610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.799643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.799659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.799675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.799691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.799715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.799730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.799753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.799776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.799793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.799807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.799829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.799844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 
23:22:09.799867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.799882 | orchestrator | 2025-07-05 23:22:09.799897 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-07-05 23:22:09.799913 | orchestrator | Saturday 05 July 2025 23:20:14 +0000 (0:00:05.606) 0:02:47.377 ********* 2025-07-05 23:22:09.799929 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-05 23:22:09.799945 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-05 23:22:09.799962 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-05 23:22:09.799979 | orchestrator | 2025-07-05 23:22:09.799995 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-07-05 23:22:09.800012 | orchestrator | Saturday 05 July 2025 23:20:16 +0000 (0:00:01.605) 0:02:48.983 ********* 2025-07-05 23:22:09.800029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.800084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.800113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.800140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.800156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.800171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.800193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.800209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.800226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.800259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.800277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.800295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.800312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.800335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.800353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.800370 | orchestrator | 2025-07-05 23:22:09.800386 | orchestrator | TASK [octavia : Copying over Octavia SSH key] 
********************************** 2025-07-05 23:22:09.800402 | orchestrator | Saturday 05 July 2025 23:20:32 +0000 (0:00:16.092) 0:03:05.075 ********* 2025-07-05 23:22:09.800418 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.800436 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:22:09.800462 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:22:09.800479 | orchestrator | 2025-07-05 23:22:09.800495 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-07-05 23:22:09.800509 | orchestrator | Saturday 05 July 2025 23:20:34 +0000 (0:00:01.484) 0:03:06.560 ********* 2025-07-05 23:22:09.800522 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-05 23:22:09.800536 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-05 23:22:09.800589 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-05 23:22:09.800606 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-05 23:22:09.800620 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-05 23:22:09.800636 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-05 23:22:09.800649 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-05 23:22:09.800665 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-05 23:22:09.800681 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-05 23:22:09.800698 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-05 23:22:09.800714 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-05 23:22:09.800731 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-05 23:22:09.800747 | orchestrator | 2025-07-05 23:22:09.800763 | orchestrator | TASK [octavia : Copying certificate files for 
octavia-housekeeping] ************ 2025-07-05 23:22:09.800780 | orchestrator | Saturday 05 July 2025 23:20:39 +0000 (0:00:05.306) 0:03:11.866 ********* 2025-07-05 23:22:09.800794 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-05 23:22:09.800809 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-05 23:22:09.800823 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-05 23:22:09.800838 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-05 23:22:09.800854 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-05 23:22:09.800868 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-05 23:22:09.800883 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-05 23:22:09.800898 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-05 23:22:09.800912 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-05 23:22:09.800927 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-05 23:22:09.800941 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-05 23:22:09.800956 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-05 23:22:09.800971 | orchestrator | 2025-07-05 23:22:09.800986 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-07-05 23:22:09.801002 | orchestrator | Saturday 05 July 2025 23:20:44 +0000 (0:00:05.473) 0:03:17.340 ********* 2025-07-05 23:22:09.801018 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-05 23:22:09.801034 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-05 23:22:09.801052 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-05 23:22:09.801067 | orchestrator | 
changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-05 23:22:09.801084 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-05 23:22:09.801099 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-05 23:22:09.801113 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-05 23:22:09.801128 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-05 23:22:09.801151 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-05 23:22:09.801166 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-05 23:22:09.801197 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-05 23:22:09.801212 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-05 23:22:09.801226 | orchestrator | 2025-07-05 23:22:09.801242 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-07-05 23:22:09.801256 | orchestrator | Saturday 05 July 2025 23:20:49 +0000 (0:00:05.087) 0:03:22.427 ********* 2025-07-05 23:22:09.801272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.801300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.801316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-05 23:22:09.801333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.801355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.801379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-05 23:22:09.801395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.801417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.801433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.801449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.801464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.801486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-05 23:22:09.801510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.801526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.801614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-05 23:22:09.801636 | orchestrator | 2025-07-05 23:22:09.801653 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-05 23:22:09.801670 | orchestrator | Saturday 05 July 2025 23:20:53 +0000 (0:00:03.816) 0:03:26.244 ********* 2025-07-05 23:22:09.801686 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:22:09.801703 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:22:09.801718 | orchestrator | skipping: [testbed-node-2] 
2025-07-05 23:22:09.801734 | orchestrator | 2025-07-05 23:22:09.801752 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-07-05 23:22:09.801768 | orchestrator | Saturday 05 July 2025 23:20:54 +0000 (0:00:00.304) 0:03:26.548 ********* 2025-07-05 23:22:09.801784 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.801801 | orchestrator | 2025-07-05 23:22:09.801817 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-07-05 23:22:09.801833 | orchestrator | Saturday 05 July 2025 23:20:56 +0000 (0:00:02.104) 0:03:28.653 ********* 2025-07-05 23:22:09.801848 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.801865 | orchestrator | 2025-07-05 23:22:09.801880 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-07-05 23:22:09.801897 | orchestrator | Saturday 05 July 2025 23:20:58 +0000 (0:00:02.417) 0:03:31.071 ********* 2025-07-05 23:22:09.801913 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.801926 | orchestrator | 2025-07-05 23:22:09.801940 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-07-05 23:22:09.801949 | orchestrator | Saturday 05 July 2025 23:21:00 +0000 (0:00:02.076) 0:03:33.147 ********* 2025-07-05 23:22:09.801965 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.801973 | orchestrator | 2025-07-05 23:22:09.801981 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-07-05 23:22:09.801989 | orchestrator | Saturday 05 July 2025 23:21:02 +0000 (0:00:02.180) 0:03:35.327 ********* 2025-07-05 23:22:09.801997 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.802005 | orchestrator | 2025-07-05 23:22:09.802013 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-07-05 23:22:09.802071 | orchestrator | 
Saturday 05 July 2025 23:21:22 +0000 (0:00:20.003) 0:03:55.330 ********* 2025-07-05 23:22:09.802092 | orchestrator | 2025-07-05 23:22:09.802107 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-07-05 23:22:09.802119 | orchestrator | Saturday 05 July 2025 23:21:22 +0000 (0:00:00.067) 0:03:55.397 ********* 2025-07-05 23:22:09.802133 | orchestrator | 2025-07-05 23:22:09.802147 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-07-05 23:22:09.802159 | orchestrator | Saturday 05 July 2025 23:21:22 +0000 (0:00:00.066) 0:03:55.464 ********* 2025-07-05 23:22:09.802174 | orchestrator | 2025-07-05 23:22:09.802186 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-07-05 23:22:09.802194 | orchestrator | Saturday 05 July 2025 23:21:23 +0000 (0:00:00.067) 0:03:55.532 ********* 2025-07-05 23:22:09.802202 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.802210 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:22:09.802218 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:22:09.802226 | orchestrator | 2025-07-05 23:22:09.802234 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-07-05 23:22:09.802248 | orchestrator | Saturday 05 July 2025 23:21:34 +0000 (0:00:11.813) 0:04:07.345 ********* 2025-07-05 23:22:09.802256 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.802264 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:22:09.802273 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:22:09.802280 | orchestrator | 2025-07-05 23:22:09.802288 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-07-05 23:22:09.802296 | orchestrator | Saturday 05 July 2025 23:21:41 +0000 (0:00:06.511) 0:04:13.857 ********* 2025-07-05 23:22:09.802304 | orchestrator | changed: [testbed-node-1] 
2025-07-05 23:22:09.802312 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.802324 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:22:09.802343 | orchestrator | 2025-07-05 23:22:09.802360 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-07-05 23:22:09.802373 | orchestrator | Saturday 05 July 2025 23:21:51 +0000 (0:00:10.454) 0:04:24.311 ********* 2025-07-05 23:22:09.802385 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.802397 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:22:09.802410 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:22:09.802422 | orchestrator | 2025-07-05 23:22:09.802434 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-07-05 23:22:09.802447 | orchestrator | Saturday 05 July 2025 23:21:57 +0000 (0:00:05.287) 0:04:29.599 ********* 2025-07-05 23:22:09.802459 | orchestrator | changed: [testbed-node-2] 2025-07-05 23:22:09.802472 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:22:09.802484 | orchestrator | changed: [testbed-node-1] 2025-07-05 23:22:09.802497 | orchestrator | 2025-07-05 23:22:09.802509 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:22:09.802521 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-05 23:22:09.802534 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-05 23:22:09.802600 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-05 23:22:09.802625 | orchestrator | 2025-07-05 23:22:09.802638 | orchestrator | 2025-07-05 23:22:09.802650 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:22:09.802662 | orchestrator | Saturday 05 July 2025 23:22:07 +0000 
(0:00:10.616) 0:04:40.216 ********* 2025-07-05 23:22:09.802683 | orchestrator | =============================================================================== 2025-07-05 23:22:09.802696 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.00s 2025-07-05 23:22:09.802708 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.55s 2025-07-05 23:22:09.802720 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.09s 2025-07-05 23:22:09.802732 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.63s 2025-07-05 23:22:09.802745 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.81s 2025-07-05 23:22:09.802757 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.87s 2025-07-05 23:22:09.802769 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.62s 2025-07-05 23:22:09.802781 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.45s 2025-07-05 23:22:09.802793 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.49s 2025-07-05 23:22:09.802805 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.56s 2025-07-05 23:22:09.802818 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.09s 2025-07-05 23:22:09.802830 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.68s 2025-07-05 23:22:09.802842 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.51s 2025-07-05 23:22:09.802854 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.93s 2025-07-05 23:22:09.802866 | orchestrator | octavia : Copying over config.json files for services 
------------------- 5.61s 2025-07-05 23:22:09.802878 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.54s 2025-07-05 23:22:09.802890 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.47s 2025-07-05 23:22:09.802902 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.44s 2025-07-05 23:22:09.802915 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.31s 2025-07-05 23:22:09.802927 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.30s 2025-07-05 23:22:09.802939 | orchestrator | 2025-07-05 23:22:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-05 23:23:10.594953 | orchestrator | 2025-07-05 23:23:10.866312 | orchestrator | 2025-07-05 23:23:10.868088 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Jul 5 23:23:10 UTC 2025 2025-07-05 23:23:10.868136 | orchestrator | 2025-07-05 23:23:11.316685 | orchestrator | ok: Runtime: 0:34:35.374402 2025-07-05 23:23:11.585634 | 2025-07-05 23:23:11.585901 | TASK [Bootstrap services] 2025-07-05 23:23:12.270823 | orchestrator | 2025-07-05 23:23:12.271025 | orchestrator | # BOOTSTRAP 2025-07-05 23:23:12.271055 | orchestrator | 2025-07-05 23:23:12.271070 | orchestrator | + set -e 2025-07-05 23:23:12.271083 | orchestrator | + echo 2025-07-05 23:23:12.271097 | orchestrator | + echo '# BOOTSTRAP' 2025-07-05 23:23:12.271115 | orchestrator | + echo 2025-07-05 23:23:12.271159 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-07-05 23:23:12.280602 | orchestrator | + set -e
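The bootstrap trace above shows the wrapper pattern: each stage script is invoked via `sh -c` under `set -e`, so the first failing stage aborts the whole bootstrap. A minimal sketch of that pattern, with stand-in stage commands instead of the real `/opt/configuration/scripts/bootstrap/*.sh` scripts:

```shell
# Run each bootstrap stage via `sh -c` under `set -e`, mirroring the
# wrapper above. Stage names and commands are illustrative stand-ins.
set -e

run_stage() {
    echo "# running stage: $1"
    sh -c "$2" || return 1   # a failing stage propagates its failure
}

run_stage "300-openstack" "true"
run_stage "301-octavia-amphora-image" "true"
echo "all stages completed"
```

Because of `set -e`, a `run_stage` that returns non-zero at the top level stops the script immediately, which is why a broken stage never lets later stages run.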
2025-07-05 23:23:12.280823 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-07-05 23:23:15.356860 | orchestrator | 2025-07-05 23:23:15 | INFO  | It takes a moment until task 97cca4ba-bc88-49b5-a4c5-548d098e217d (flavor-manager) has been started and output is visible here. 2025-07-05 23:23:23.192585 | orchestrator | 2025-07-05 23:23:19 | INFO  | Flavor SCS-1V-4 created 2025-07-05 23:23:23.192732 | orchestrator | 2025-07-05 23:23:19 | INFO  | Flavor SCS-2V-8 created 2025-07-05 23:23:23.192770 | orchestrator | 2025-07-05 23:23:19 | INFO  | Flavor SCS-4V-16 created 2025-07-05 23:23:23.192795 | orchestrator | 2025-07-05 23:23:19 | INFO  | Flavor SCS-8V-32 created 2025-07-05 23:23:23.192815 | orchestrator | 2025-07-05 23:23:19 | INFO  | Flavor SCS-1V-2 created 2025-07-05 23:23:23.192833 | orchestrator | 2025-07-05 23:23:19 | INFO  | Flavor SCS-2V-4 created 2025-07-05 23:23:23.192850 | orchestrator | 2025-07-05 23:23:20 | INFO  | Flavor SCS-4V-8 created 2025-07-05 23:23:23.192873 | orchestrator | 2025-07-05 23:23:20 | INFO  | Flavor SCS-8V-16 created 2025-07-05 23:23:23.192912 | orchestrator | 2025-07-05 23:23:20 | INFO  | Flavor SCS-16V-32 created 2025-07-05 23:23:23.192933 | orchestrator | 2025-07-05 23:23:20 | INFO  | Flavor SCS-1V-8 created 2025-07-05 23:23:23.192953 | orchestrator | 2025-07-05 23:23:20 | INFO  | Flavor SCS-2V-16 created 2025-07-05 23:23:23.192972 | orchestrator | 2025-07-05 23:23:20 | INFO  | Flavor SCS-4V-32 created 2025-07-05 23:23:23.192984 | orchestrator | 2025-07-05 23:23:20 | INFO  | Flavor SCS-1L-1 created 2025-07-05 23:23:23.192996 | orchestrator | 2025-07-05 23:23:21 | INFO  | Flavor SCS-2V-4-20s created 2025-07-05 23:23:23.193007 | orchestrator | 2025-07-05 23:23:21 | INFO  | Flavor SCS-4V-16-100s created 2025-07-05 23:23:23.193018 | orchestrator | 2025-07-05 23:23:21 | INFO  | Flavor SCS-1V-4-10 created 2025-07-05 23:23:23.193029 | orchestrator | 2025-07-05 23:23:21 | INFO  | Flavor SCS-2V-8-20 created 
2025-07-05 23:23:23.193040 | orchestrator | 2025-07-05 23:23:21 | INFO  | Flavor SCS-4V-16-50 created 2025-07-05 23:23:23.193051 | orchestrator | 2025-07-05 23:23:21 | INFO  | Flavor SCS-8V-32-100 created 2025-07-05 23:23:23.193062 | orchestrator | 2025-07-05 23:23:21 | INFO  | Flavor SCS-1V-2-5 created 2025-07-05 23:23:23.193073 | orchestrator | 2025-07-05 23:23:22 | INFO  | Flavor SCS-2V-4-10 created 2025-07-05 23:23:23.193084 | orchestrator | 2025-07-05 23:23:22 | INFO  | Flavor SCS-4V-8-20 created 2025-07-05 23:23:23.193096 | orchestrator | 2025-07-05 23:23:22 | INFO  | Flavor SCS-8V-16-50 created 2025-07-05 23:23:23.193107 | orchestrator | 2025-07-05 23:23:22 | INFO  | Flavor SCS-16V-32-100 created 2025-07-05 23:23:23.193118 | orchestrator | 2025-07-05 23:23:22 | INFO  | Flavor SCS-1V-8-20 created 2025-07-05 23:23:23.193129 | orchestrator | 2025-07-05 23:23:22 | INFO  | Flavor SCS-2V-16-50 created 2025-07-05 23:23:23.193140 | orchestrator | 2025-07-05 23:23:22 | INFO  | Flavor SCS-4V-32-100 created 2025-07-05 23:23:23.193151 | orchestrator | 2025-07-05 23:23:22 | INFO  | Flavor SCS-1L-1-5 created 2025-07-05 23:23:25.271435 | orchestrator | 2025-07-05 23:23:25 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-07-05 23:23:35.377263 | orchestrator | 2025-07-05 23:23:35 | INFO  | Task 0a21abbb-dab5-436a-863f-3464c8f239f4 (bootstrap-basic) was prepared for execution. 2025-07-05 23:23:35.377389 | orchestrator | 2025-07-05 23:23:35 | INFO  | It takes a moment until task 0a21abbb-dab5-436a-863f-3464c8f239f4 (bootstrap-basic) has been started and output is visible here. 
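The flavor names created above encode their specs in the SCS naming scheme: `SCS-<n>V-<ram>[-<disk>[s]]`, e.g. `SCS-2V-8-20` is 2 vCPUs, 8 GiB RAM, 20 GiB disk (`L` instead of `V` marks low-performance vCPUs, a trailing `s` marks SSD-backed disk). A small parser for that convention; it only decodes names — the real flavors are created by osism's flavor-manager:

```shell
# Decode an SCS flavor name into its resource spec. Pure POSIX-sh
# parameter expansion; no OpenStack calls are made here.
scs_flavor_spec() {
    name=${1#SCS-}
    cpu_tok=${name%%-*}          # e.g. "2V" or "1L"
    vcpus=${cpu_tok%[VL]}        # strip the V/L marker
    rest=${name#*-}              # e.g. "8-20" or just "8"
    ram=${rest%%-*}
    case $rest in
        *-*) disk=${rest#*-}; disk=${disk%s} ;;  # drop the SSD suffix
        *)   disk=0 ;;                           # no disk part in the name
    esac
    echo "vcpus=$vcpus ram_gib=$ram disk_gib=$disk"
}
```

For example, `scs_flavor_spec SCS-4V-16-100s` prints `vcpus=4 ram_gib=16 disk_gib=100`.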
2025-07-05 23:24:33.879303 | orchestrator | 2025-07-05 23:24:33.879421 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-07-05 23:24:33.879439 | orchestrator | 2025-07-05 23:24:33.879451 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-05 23:24:33.879463 | orchestrator | Saturday 05 July 2025 23:23:39 +0000 (0:00:00.072) 0:00:00.072 ********* 2025-07-05 23:24:33.879474 | orchestrator | ok: [localhost] 2025-07-05 23:24:33.879486 | orchestrator | 2025-07-05 23:24:33.879498 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-07-05 23:24:33.879512 | orchestrator | Saturday 05 July 2025 23:23:41 +0000 (0:00:01.855) 0:00:01.928 ********* 2025-07-05 23:24:33.879523 | orchestrator | ok: [localhost] 2025-07-05 23:24:33.879534 | orchestrator | 2025-07-05 23:24:33.879545 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-07-05 23:24:33.879556 | orchestrator | Saturday 05 July 2025 23:23:49 +0000 (0:00:08.117) 0:00:10.045 ********* 2025-07-05 23:24:33.879602 | orchestrator | changed: [localhost] 2025-07-05 23:24:33.879624 | orchestrator | 2025-07-05 23:24:33.879643 | orchestrator | TASK [Get volume type local] *************************************************** 2025-07-05 23:24:33.879661 | orchestrator | Saturday 05 July 2025 23:23:57 +0000 (0:00:07.722) 0:00:17.767 ********* 2025-07-05 23:24:33.879676 | orchestrator | ok: [localhost] 2025-07-05 23:24:33.879688 | orchestrator | 2025-07-05 23:24:33.879699 | orchestrator | TASK [Create volume type local] ************************************************ 2025-07-05 23:24:33.879710 | orchestrator | Saturday 05 July 2025 23:24:04 +0000 (0:00:06.936) 0:00:24.704 ********* 2025-07-05 23:24:33.879721 | orchestrator | changed: [localhost] 2025-07-05 23:24:33.879737 | orchestrator | 2025-07-05 23:24:33.879748 | orchestrator | 
TASK [Create public network] *************************************************** 2025-07-05 23:24:33.879759 | orchestrator | Saturday 05 July 2025 23:24:10 +0000 (0:00:06.706) 0:00:31.410 ********* 2025-07-05 23:24:33.879770 | orchestrator | changed: [localhost] 2025-07-05 23:24:33.879781 | orchestrator | 2025-07-05 23:24:33.879791 | orchestrator | TASK [Set public network to default] ******************************************* 2025-07-05 23:24:33.879802 | orchestrator | Saturday 05 July 2025 23:24:15 +0000 (0:00:05.099) 0:00:36.510 ********* 2025-07-05 23:24:33.879813 | orchestrator | changed: [localhost] 2025-07-05 23:24:33.879826 | orchestrator | 2025-07-05 23:24:33.879848 | orchestrator | TASK [Create public subnet] **************************************************** 2025-07-05 23:24:33.879862 | orchestrator | Saturday 05 July 2025 23:24:22 +0000 (0:00:06.219) 0:00:42.730 ********* 2025-07-05 23:24:33.879874 | orchestrator | changed: [localhost] 2025-07-05 23:24:33.879886 | orchestrator | 2025-07-05 23:24:33.879898 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-07-05 23:24:33.879911 | orchestrator | Saturday 05 July 2025 23:24:26 +0000 (0:00:04.261) 0:00:46.991 ********* 2025-07-05 23:24:33.879922 | orchestrator | changed: [localhost] 2025-07-05 23:24:33.879936 | orchestrator | 2025-07-05 23:24:33.879948 | orchestrator | TASK [Create manager role] ***************************************************** 2025-07-05 23:24:33.879960 | orchestrator | Saturday 05 July 2025 23:24:30 +0000 (0:00:03.758) 0:00:50.749 ********* 2025-07-05 23:24:33.879972 | orchestrator | ok: [localhost] 2025-07-05 23:24:33.879985 | orchestrator | 2025-07-05 23:24:33.879997 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:24:33.880010 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-05 23:24:33.880024 | orchestrator 
| 2025-07-05 23:24:33.880037 | orchestrator | 2025-07-05 23:24:33.880049 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:24:33.880062 | orchestrator | Saturday 05 July 2025 23:24:33 +0000 (0:00:03.559) 0:00:54.308 ********* 2025-07-05 23:24:33.880102 | orchestrator | =============================================================================== 2025-07-05 23:24:33.880117 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.12s 2025-07-05 23:24:33.880129 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.72s 2025-07-05 23:24:33.880141 | orchestrator | Get volume type local --------------------------------------------------- 6.94s 2025-07-05 23:24:33.880154 | orchestrator | Create volume type local ------------------------------------------------ 6.71s 2025-07-05 23:24:33.880166 | orchestrator | Set public network to default ------------------------------------------- 6.22s 2025-07-05 23:24:33.880178 | orchestrator | Create public network --------------------------------------------------- 5.10s 2025-07-05 23:24:33.880189 | orchestrator | Create public subnet ---------------------------------------------------- 4.26s 2025-07-05 23:24:33.880200 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.76s 2025-07-05 23:24:33.880211 | orchestrator | Create manager role ----------------------------------------------------- 3.56s 2025-07-05 23:24:33.880221 | orchestrator | Gathering Facts --------------------------------------------------------- 1.86s 2025-07-05 23:24:35.962681 | orchestrator | 2025-07-05 23:24:35 | INFO  | It takes a moment until task 90fee179-4649-4a99-aec4-fb018f8cd52e (image-manager) has been started and output is visible here. 
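The bootstrap-basic play above follows a get-then-create pattern for idempotency: each resource ("Get volume type LUKS" → "Create volume type LUKS") is looked up first and only created when missing, so a re-run reports `ok` instead of `changed`. A sketch of that pattern with stubbed lookup/create commands standing in for the real OpenStack CLI/SDK calls:

```shell
# Idempotent resource creation: only run the create command when the
# lookup fails. Lookup/create are arbitrary shell commands here.
ensure_resource() {
    name=$1; lookup=$2; create=$3
    if sh -c "$lookup" >/dev/null 2>&1; then
        echo "ok: $name"                              # already present
    else
        sh -c "$create" >/dev/null 2>&1 && echo "changed: $name"
    fi
}
```

Run against an existing resource the function reports `ok`, matching the `ok`/`changed` counts in the play recap above.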
2025-07-05 23:25:16.930493 | orchestrator | 2025-07-05 23:24:39 | INFO  | Processing image 'Cirros 0.6.2' 2025-07-05 23:25:16.930649 | orchestrator | 2025-07-05 23:24:39 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-07-05 23:25:16.930670 | orchestrator | 2025-07-05 23:24:39 | INFO  | Importing image Cirros 0.6.2 2025-07-05 23:25:16.930682 | orchestrator | 2025-07-05 23:24:39 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-05 23:25:16.930695 | orchestrator | 2025-07-05 23:24:41 | INFO  | Waiting for image to leave queued state... 2025-07-05 23:25:16.930707 | orchestrator | 2025-07-05 23:24:43 | INFO  | Waiting for import to complete... 2025-07-05 23:25:16.930719 | orchestrator | 2025-07-05 23:24:53 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-07-05 23:25:16.930730 | orchestrator | 2025-07-05 23:24:53 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-07-05 23:25:16.930742 | orchestrator | 2025-07-05 23:24:53 | INFO  | Setting internal_version = 0.6.2 2025-07-05 23:25:16.930753 | orchestrator | 2025-07-05 23:24:53 | INFO  | Setting image_original_user = cirros 2025-07-05 23:25:16.930764 | orchestrator | 2025-07-05 23:24:53 | INFO  | Adding tag os:cirros 2025-07-05 23:25:16.930776 | orchestrator | 2025-07-05 23:24:54 | INFO  | Setting property architecture: x86_64 2025-07-05 23:25:16.930792 | orchestrator | 2025-07-05 23:24:54 | INFO  | Setting property hw_disk_bus: scsi 2025-07-05 23:25:16.930811 | orchestrator | 2025-07-05 23:24:54 | INFO  | Setting property hw_rng_model: virtio 2025-07-05 23:25:16.930829 | orchestrator | 2025-07-05 23:24:55 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-05 23:25:16.930847 | orchestrator | 2025-07-05 23:24:55 | INFO  | Setting property hw_watchdog_action: reset 2025-07-05 23:25:16.930865 | orchestrator | 2025-07-05 23:24:55 | 
INFO  | Setting property hypervisor_type: qemu 2025-07-05 23:25:16.930882 | orchestrator | 2025-07-05 23:24:55 | INFO  | Setting property os_distro: cirros 2025-07-05 23:25:16.930901 | orchestrator | 2025-07-05 23:24:55 | INFO  | Setting property replace_frequency: never 2025-07-05 23:25:16.930919 | orchestrator | 2025-07-05 23:24:56 | INFO  | Setting property uuid_validity: none 2025-07-05 23:25:16.930938 | orchestrator | 2025-07-05 23:24:56 | INFO  | Setting property provided_until: none 2025-07-05 23:25:16.930991 | orchestrator | 2025-07-05 23:24:56 | INFO  | Setting property image_description: Cirros 2025-07-05 23:25:16.931025 | orchestrator | 2025-07-05 23:24:56 | INFO  | Setting property image_name: Cirros 2025-07-05 23:25:16.931046 | orchestrator | 2025-07-05 23:24:57 | INFO  | Setting property internal_version: 0.6.2 2025-07-05 23:25:16.931066 | orchestrator | 2025-07-05 23:24:57 | INFO  | Setting property image_original_user: cirros 2025-07-05 23:25:16.931079 | orchestrator | 2025-07-05 23:24:57 | INFO  | Setting property os_version: 0.6.2 2025-07-05 23:25:16.931092 | orchestrator | 2025-07-05 23:24:57 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-05 23:25:16.931106 | orchestrator | 2025-07-05 23:24:57 | INFO  | Setting property image_build_date: 2023-05-30 2025-07-05 23:25:16.931119 | orchestrator | 2025-07-05 23:24:58 | INFO  | Checking status of 'Cirros 0.6.2' 2025-07-05 23:25:16.931131 | orchestrator | 2025-07-05 23:24:58 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-07-05 23:25:16.931144 | orchestrator | 2025-07-05 23:24:58 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-07-05 23:25:16.931156 | orchestrator | 2025-07-05 23:24:58 | INFO  | Processing image 'Cirros 0.6.3' 2025-07-05 23:25:16.931169 | orchestrator | 2025-07-05 23:24:58 | INFO  | Tested URL 
https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-07-05 23:25:16.931182 | orchestrator | 2025-07-05 23:24:58 | INFO  | Importing image Cirros 0.6.3 2025-07-05 23:25:16.931194 | orchestrator | 2025-07-05 23:24:58 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-05 23:25:16.931205 | orchestrator | 2025-07-05 23:24:59 | INFO  | Waiting for image to leave queued state... 2025-07-05 23:25:16.931216 | orchestrator | 2025-07-05 23:25:01 | INFO  | Waiting for import to complete... 2025-07-05 23:25:16.931227 | orchestrator | 2025-07-05 23:25:11 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-07-05 23:25:16.931258 | orchestrator | 2025-07-05 23:25:12 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-07-05 23:25:16.931269 | orchestrator | 2025-07-05 23:25:12 | INFO  | Setting internal_version = 0.6.3 2025-07-05 23:25:16.931281 | orchestrator | 2025-07-05 23:25:12 | INFO  | Setting image_original_user = cirros 2025-07-05 23:25:16.931292 | orchestrator | 2025-07-05 23:25:12 | INFO  | Adding tag os:cirros 2025-07-05 23:25:16.931303 | orchestrator | 2025-07-05 23:25:12 | INFO  | Setting property architecture: x86_64 2025-07-05 23:25:16.931314 | orchestrator | 2025-07-05 23:25:12 | INFO  | Setting property hw_disk_bus: scsi 2025-07-05 23:25:16.931325 | orchestrator | 2025-07-05 23:25:12 | INFO  | Setting property hw_rng_model: virtio 2025-07-05 23:25:16.931335 | orchestrator | 2025-07-05 23:25:13 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-05 23:25:16.931347 | orchestrator | 2025-07-05 23:25:13 | INFO  | Setting property hw_watchdog_action: reset 2025-07-05 23:25:16.931358 | orchestrator | 2025-07-05 23:25:13 | INFO  | Setting property hypervisor_type: qemu 2025-07-05 23:25:16.931369 | orchestrator | 2025-07-05 23:25:13 | INFO  | Setting property os_distro: cirros 2025-07-05 23:25:16.931380 | 
orchestrator | 2025-07-05 23:25:13 | INFO  | Setting property replace_frequency: never 2025-07-05 23:25:16.931391 | orchestrator | 2025-07-05 23:25:14 | INFO  | Setting property uuid_validity: none 2025-07-05 23:25:16.931412 | orchestrator | 2025-07-05 23:25:14 | INFO  | Setting property provided_until: none 2025-07-05 23:25:16.931423 | orchestrator | 2025-07-05 23:25:14 | INFO  | Setting property image_description: Cirros 2025-07-05 23:25:16.931434 | orchestrator | 2025-07-05 23:25:14 | INFO  | Setting property image_name: Cirros 2025-07-05 23:25:16.931445 | orchestrator | 2025-07-05 23:25:14 | INFO  | Setting property internal_version: 0.6.3 2025-07-05 23:25:16.931456 | orchestrator | 2025-07-05 23:25:15 | INFO  | Setting property image_original_user: cirros 2025-07-05 23:25:16.931467 | orchestrator | 2025-07-05 23:25:15 | INFO  | Setting property os_version: 0.6.3 2025-07-05 23:25:16.931478 | orchestrator | 2025-07-05 23:25:15 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-05 23:25:16.931489 | orchestrator | 2025-07-05 23:25:15 | INFO  | Setting property image_build_date: 2024-09-26 2025-07-05 23:25:16.931500 | orchestrator | 2025-07-05 23:25:16 | INFO  | Checking status of 'Cirros 0.6.3' 2025-07-05 23:25:16.931510 | orchestrator | 2025-07-05 23:25:16 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-07-05 23:25:16.931527 | orchestrator | 2025-07-05 23:25:16 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-07-05 23:25:17.198336 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-07-05 23:25:19.099753 | orchestrator | 2025-07-05 23:25:19 | INFO  | date: 2025-07-05 2025-07-05 23:25:19.099869 | orchestrator | 2025-07-05 23:25:19 | INFO  | image: octavia-amphora-haproxy-2024.2.20250705.qcow2 2025-07-05 23:25:19.099896 | orchestrator | 2025-07-05 23:25:19 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250705.qcow2 2025-07-05 23:25:19.100049 | orchestrator | 2025-07-05 23:25:19 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250705.qcow2.CHECKSUM 2025-07-05 23:25:19.141287 | orchestrator | 2025-07-05 23:25:19 | INFO  | checksum: d4cceb8a23aee4c4e530109ea869d1a2c423379c56f50213909a515832bf2a0e 2025-07-05 23:25:19.216984 | orchestrator | 2025-07-05 23:25:19 | INFO  | It takes a moment until task d96d2aad-fcae-4171-a631-912d897f694d (image-manager) has been started and output is visible here. 2025-07-05 23:26:19.582806 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-07-05 23:26:19.582932 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-07-05 23:26:19.582950 | orchestrator | 2025-07-05 23:25:21 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-07-05' 2025-07-05 23:26:19.582967 | orchestrator | 2025-07-05 23:25:21 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250705.qcow2: 200 2025-07-05 23:26:19.582981 | orchestrator | 2025-07-05 23:25:21 | INFO  | Importing image OpenStack Octavia Amphora 2025-07-05 2025-07-05 23:26:19.582992 | orchestrator | 2025-07-05 23:25:21 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250705.qcow2 2025-07-05 23:26:19.583006 | orchestrator | 2025-07-05 23:25:21 | INFO  | Waiting for image to leave queued state... 2025-07-05 23:26:19.583041 | orchestrator | 2025-07-05 23:25:23 | INFO  | Waiting for import to complete... 2025-07-05 23:26:19.583053 | orchestrator | 2025-07-05 23:25:34 | INFO  | Waiting for import to complete... 2025-07-05 23:26:19.583064 | orchestrator | 2025-07-05 23:25:44 | INFO  | Waiting for import to complete... 2025-07-05 23:26:19.583075 | orchestrator | 2025-07-05 23:25:54 | INFO  | Waiting for import to complete... 2025-07-05 23:26:19.583086 | orchestrator | 2025-07-05 23:26:04 | INFO  | Waiting for import to complete... 
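The repeated "Waiting for import to complete..." lines above come from a poll loop on the image status. A minimal sketch with a stubbed status command; the real openstack-image-manager queries the Glance image until it leaves the queued/importing states:

```shell
# Poll a status command until it reports "active", giving up after a
# bounded number of polls. The status command is a stand-in for the
# real Glance API query.
wait_for_import() {
    status_cmd=$1; max_polls=$2; polls=0
    until [ "$($status_cmd)" = "active" ]; do
        polls=$((polls + 1))
        [ "$polls" -ge "$max_polls" ] && return 1   # import never finished
        # the real loop sleeps ~10s between polls; omitted here
    done
    echo "import completed after $polls poll(s)"
}
```

Bounding the number of polls keeps a stuck import from hanging the bootstrap forever; the loop in the trace above polled roughly every ten seconds for about a minute.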
2025-07-05 23:26:19.583097 | orchestrator | 2025-07-05 23:26:14 | INFO  | Import of 'OpenStack Octavia Amphora 2025-07-05' successfully completed, reloading images
2025-07-05 23:26:19.583109 | orchestrator | 2025-07-05 23:26:14 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-07-05'
2025-07-05 23:26:19.583120 | orchestrator | 2025-07-05 23:26:14 | INFO  | Setting internal_version = 2025-07-05
2025-07-05 23:26:19.583132 | orchestrator | 2025-07-05 23:26:14 | INFO  | Setting image_original_user = ubuntu
2025-07-05 23:26:19.583143 | orchestrator | 2025-07-05 23:26:14 | INFO  | Adding tag amphora
2025-07-05 23:26:19.583154 | orchestrator | 2025-07-05 23:26:15 | INFO  | Adding tag os:ubuntu
2025-07-05 23:26:19.583165 | orchestrator | 2025-07-05 23:26:15 | INFO  | Setting property architecture: x86_64
2025-07-05 23:26:19.583176 | orchestrator | 2025-07-05 23:26:15 | INFO  | Setting property hw_disk_bus: scsi
2025-07-05 23:26:19.583187 | orchestrator | 2025-07-05 23:26:15 | INFO  | Setting property hw_rng_model: virtio
2025-07-05 23:26:19.583206 | orchestrator | 2025-07-05 23:26:16 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-07-05 23:26:19.583218 | orchestrator | 2025-07-05 23:26:16 | INFO  | Setting property hw_watchdog_action: reset
2025-07-05 23:26:19.583229 | orchestrator | 2025-07-05 23:26:16 | INFO  | Setting property hypervisor_type: qemu
2025-07-05 23:26:19.583240 | orchestrator | 2025-07-05 23:26:16 | INFO  | Setting property os_distro: ubuntu
2025-07-05 23:26:19.583250 | orchestrator | 2025-07-05 23:26:17 | INFO  | Setting property replace_frequency: quarterly
2025-07-05 23:26:19.583261 | orchestrator | 2025-07-05 23:26:17 | INFO  | Setting property uuid_validity: last-1
2025-07-05 23:26:19.583272 | orchestrator | 2025-07-05 23:26:17 | INFO  | Setting property provided_until: none
2025-07-05 23:26:19.583283 | orchestrator | 2025-07-05 23:26:17 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-07-05
2025-07-05 23:26:19.583294 | orchestrator | 2025-07-05 23:26:17 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-07-05
2025-07-05 23:26:19.583305 | orchestrator | 2025-07-05 23:26:18 | INFO  | Setting property internal_version: 2025-07-05
2025-07-05 23:26:19.583316 | orchestrator | 2025-07-05 23:26:18 | INFO  | Setting property image_original_user: ubuntu
2025-07-05 23:26:19.583327 | orchestrator | 2025-07-05 23:26:18 | INFO  | Setting property os_version: 2025-07-05
2025-07-05 23:26:19.583338 | orchestrator | 2025-07-05 23:26:18 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250705.qcow2
2025-07-05 23:26:19.583368 | orchestrator | 2025-07-05 23:26:18 | INFO  | Setting property image_build_date: 2025-07-05
2025-07-05 23:26:19.583380 | orchestrator | 2025-07-05 23:26:19 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-07-05'
2025-07-05 23:26:19.583391 | orchestrator | 2025-07-05 23:26:19 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-07-05'
2025-07-05 23:26:19.583411 | orchestrator | 2025-07-05 23:26:19 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-07-05 23:26:19.583423 | orchestrator | 2025-07-05 23:26:19 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-07-05 23:26:19.583435 | orchestrator | 2025-07-05 23:26:19 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-07-05 23:26:19.583446 | orchestrator | 2025-07-05 23:26:19 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-07-05 23:26:20.260330 | orchestrator | ok: Runtime: 0:03:07.985673
2025-07-05 23:26:20.325054 |
2025-07-05 23:26:20.325217 | TASK [Run checks]
2025-07-05 23:26:21.005405 | orchestrator | + set -e
2025-07-05 23:26:21.005654 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-05 23:26:21.005684 |
orchestrator | ++ export INTERACTIVE=false 2025-07-05 23:26:21.005707 | orchestrator | ++ INTERACTIVE=false 2025-07-05 23:26:21.005722 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-05 23:26:21.005735 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-05 23:26:21.005750 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-05 23:26:21.006608 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-05 23:26:21.012750 | orchestrator | 2025-07-05 23:26:21.012783 | orchestrator | # CHECK 2025-07-05 23:26:21.012795 | orchestrator | 2025-07-05 23:26:21.012806 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-05 23:26:21.012822 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-05 23:26:21.012834 | orchestrator | + echo 2025-07-05 23:26:21.012845 | orchestrator | + echo '# CHECK' 2025-07-05 23:26:21.012857 | orchestrator | + echo 2025-07-05 23:26:21.012872 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-05 23:26:21.014378 | orchestrator | ++ semver latest 5.0.0 2025-07-05 23:26:21.054469 | orchestrator | 2025-07-05 23:26:21.054590 | orchestrator | ## Containers @ testbed-manager 2025-07-05 23:26:21.054606 | orchestrator | 2025-07-05 23:26:21.054621 | orchestrator | + [[ -1 -eq -1 ]] 2025-07-05 23:26:21.054631 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-05 23:26:21.054642 | orchestrator | + echo 2025-07-05 23:26:21.054653 | orchestrator | + echo '## Containers @ testbed-manager' 2025-07-05 23:26:21.054663 | orchestrator | + echo 2025-07-05 23:26:21.054673 | orchestrator | + osism container testbed-manager ps 2025-07-05 23:26:23.176849 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-05 23:26:23.177008 | orchestrator | 797f23bfce21 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes 
prometheus_blackbox_exporter 2025-07-05 23:26:23.177049 | orchestrator | 3d1a5348bc9a registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2025-07-05 23:26:23.177091 | orchestrator | 68b4a0635383 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 13 minutes prometheus_cadvisor 2025-07-05 23:26:23.177104 | orchestrator | 78f259b78f53 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-07-05 23:26:23.177116 | orchestrator | 9f2bca04ab26 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2025-07-05 23:26:23.177133 | orchestrator | f8454ec88a99 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 16 minutes cephclient 2025-07-05 23:26:23.177145 | orchestrator | 450faa5849f3 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-07-05 23:26:23.177157 | orchestrator | fdb749d5208f registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-07-05 23:26:23.177168 | orchestrator | 02558bafef32 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-07-05 23:26:23.177207 | orchestrator | 7da9d54cb2b0 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 30 minutes ago Up 29 minutes (healthy) 80/tcp phpmyadmin 2025-07-05 23:26:23.177219 | orchestrator | 1479db7984a5 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 30 minutes openstackclient 2025-07-05 23:26:23.177231 | orchestrator | 0e6404caa2b9 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 30 minutes ago Up 30 minutes (healthy) 8080/tcp homer 2025-07-05 23:26:23.177242 | orchestrator | 96ce53b662a1 
registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 54 minutes ago Up 53 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-07-05 23:26:23.177254 | orchestrator | 589b0d99c6d3 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 58 minutes ago Up 38 minutes (healthy) manager-inventory_reconciler-1 2025-07-05 23:26:23.177265 | orchestrator | 7888b7fdf29b registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) osism-kubernetes 2025-07-05 23:26:23.177296 | orchestrator | d63131564998 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) ceph-ansible 2025-07-05 23:26:23.177314 | orchestrator | 1702578e3129 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) kolla-ansible 2025-07-05 23:26:23.177326 | orchestrator | 218d0f2b51cf registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) osism-ansible 2025-07-05 23:26:23.177337 | orchestrator | e6fb82fba7bc registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 58 minutes ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1 2025-07-05 23:26:23.177348 | orchestrator | bc80f44eb201 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" 58 minutes ago Up 39 minutes (healthy) 3306/tcp manager-mariadb-1 2025-07-05 23:26:23.177360 | orchestrator | 29a586213687 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 39 minutes (healthy) manager-openstack-1 2025-07-05 23:26:23.177371 | orchestrator | 8ff4222ea9c4 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 39 minutes (healthy) manager-listener-1 2025-07-05 23:26:23.177382 | orchestrator | 97c4d83a0674 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 39 minutes (healthy) 
manager-flower-1 2025-07-05 23:26:23.177393 | orchestrator | 9d744d6edd4a registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 58 minutes ago Up 39 minutes (healthy) 6379/tcp manager-redis-1 2025-07-05 23:26:23.177412 | orchestrator | 90ab65452451 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 58 minutes ago Up 39 minutes (healthy) osismclient 2025-07-05 23:26:23.177424 | orchestrator | a45675e53d73 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 39 minutes (healthy) manager-beat-1 2025-07-05 23:26:23.177435 | orchestrator | 656d3a9b313e registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 39 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-07-05 23:26:23.177447 | orchestrator | 62079ccf50e9 registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" 59 minutes ago Up 59 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-07-05 23:26:23.435488 | orchestrator | 2025-07-05 23:26:23.435632 | orchestrator | ## Images @ testbed-manager 2025-07-05 23:26:23.435651 | orchestrator | 2025-07-05 23:26:23.435663 | orchestrator | + echo 2025-07-05 23:26:23.435674 | orchestrator | + echo '## Images @ testbed-manager' 2025-07-05 23:26:23.435685 | orchestrator | + echo 2025-07-05 23:26:23.435695 | orchestrator | + osism container testbed-manager images 2025-07-05 23:26:25.500855 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-05 23:26:25.500961 | orchestrator | registry.osism.tech/osism/osism-ansible latest 35324f02d58d 3 hours ago 575MB 2025-07-05 23:26:25.500972 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest db1f3a9a978c 3 hours ago 307MB 2025-07-05 23:26:25.500981 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 5f9eb4c0083f 3 hours ago 1.21GB 2025-07-05 23:26:25.500989 | orchestrator | registry.osism.tech/osism/homer v25.05.2 
9d08b78607ff 20 hours ago 11.5MB 2025-07-05 23:26:25.501021 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 3adf25d1d9be 20 hours ago 233MB 2025-07-05 23:26:25.501029 | orchestrator | registry.osism.tech/osism/cephclient reef 546237a02028 20 hours ago 453MB 2025-07-05 23:26:25.501036 | orchestrator | registry.osism.tech/kolla/cron 2024.2 3b880ea0d69d 22 hours ago 318MB 2025-07-05 23:26:25.501044 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 2c6fc6230e4c 22 hours ago 628MB 2025-07-05 23:26:25.501051 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 8ecce820a828 22 hours ago 746MB 2025-07-05 23:26:25.501059 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 b972a780ec0a 22 hours ago 456MB 2025-07-05 23:26:25.501066 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 f96cde6e36b9 22 hours ago 410MB 2025-07-05 23:26:25.501073 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 6cc6fa57ba4a 22 hours ago 891MB 2025-07-05 23:26:25.501081 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 5e0088ce40ea 22 hours ago 360MB 2025-07-05 23:26:25.501088 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 103545d4fcfa 22 hours ago 358MB 2025-07-05 23:26:25.501095 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 750aae0c20ea 23 hours ago 571MB 2025-07-05 23:26:25.501102 | orchestrator | registry.osism.tech/osism/ceph-ansible reef bb4f425c1004 23 hours ago 535MB 2025-07-05 23:26:25.501110 | orchestrator | registry.osism.tech/osism/osism latest 08b4e265ff59 23 hours ago 310MB 2025-07-05 23:26:25.501137 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 9 days ago 226MB 2025-07-05 23:26:25.501145 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 7fb85a4198e9 3 weeks ago 329MB 2025-07-05 23:26:25.501152 | orchestrator | registry.osism.tech/dockerhub/library/redis 
7.4.4-alpine 7ff232a1fe04 5 weeks ago 41.4MB 2025-07-05 23:26:25.501159 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 5 months ago 571MB 2025-07-05 23:26:25.501167 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 10 months ago 300MB 2025-07-05 23:26:25.501174 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 13 months ago 146MB 2025-07-05 23:26:25.777442 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-05 23:26:25.778616 | orchestrator | ++ semver latest 5.0.0 2025-07-05 23:26:25.825827 | orchestrator | 2025-07-05 23:26:25.825918 | orchestrator | ## Containers @ testbed-node-0 2025-07-05 23:26:25.825930 | orchestrator | 2025-07-05 23:26:25.825939 | orchestrator | + [[ -1 -eq -1 ]] 2025-07-05 23:26:25.825947 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-05 23:26:25.825955 | orchestrator | + echo 2025-07-05 23:26:25.825963 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-07-05 23:26:25.825971 | orchestrator | + echo 2025-07-05 23:26:25.825979 | orchestrator | + osism container testbed-node-0 ps 2025-07-05 23:26:28.103187 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-05 23:26:28.103310 | orchestrator | 176bacbedac2 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-07-05 23:26:28.103328 | orchestrator | a1240210f85e registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-07-05 23:26:28.103341 | orchestrator | 97339484a3fd registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-07-05 23:26:28.103353 | orchestrator | 42a0dc45d4ea registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes 
octavia_driver_agent 2025-07-05 23:26:28.103364 | orchestrator | 60c12b4b44ff registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-07-05 23:26:28.103375 | orchestrator | c5bfdad09264 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-07-05 23:26:28.103387 | orchestrator | a437ad3a7a75 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-07-05 23:26:28.103420 | orchestrator | 1a77516a0685 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-07-05 23:26:28.103433 | orchestrator | 382978738f2a registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-07-05 23:26:28.103444 | orchestrator | 266783a3e584 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-07-05 23:26:28.103455 | orchestrator | 71db32049063 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-07-05 23:26:28.103466 | orchestrator | 7f0732e87b1f registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-07-05 23:26:28.103500 | orchestrator | 5defea26bf78 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-07-05 23:26:28.103512 | orchestrator | ceadde690d01 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2025-07-05 23:26:28.103523 | orchestrator | 40c1b3ded0dc registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) designate_central 2025-07-05 
23:26:28.103534 | orchestrator | 0b1fdbe75c0e registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-07-05 23:26:28.103546 | orchestrator | 765ef7109797 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-07-05 23:26:28.103589 | orchestrator | ae445cf14117 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-07-05 23:26:28.103601 | orchestrator | cdbc34aedb24 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-07-05 23:26:28.103612 | orchestrator | fb4142624b78 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-07-05 23:26:28.103624 | orchestrator | 9c4f00e0325b registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-07-05 23:26:28.103655 | orchestrator | 9a931757d38b registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-07-05 23:26:28.103667 | orchestrator | 0c4589c8c591 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-07-05 23:26:28.103678 | orchestrator | 6aa3bee5619e registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-07-05 23:26:28.103690 | orchestrator | a2a5762cc52d registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-07-05 23:26:28.103701 | orchestrator | 96632acb4f8d registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago 
Up 14 minutes (healthy) cinder_scheduler 2025-07-05 23:26:28.103718 | orchestrator | 7b84802d35e2 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-07-05 23:26:28.103729 | orchestrator | 22d40c02bb14 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-07-05 23:26:28.103745 | orchestrator | b2b035e1f0f5 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-07-05 23:26:28.103757 | orchestrator | e74132cabcff registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-07-05 23:26:28.103768 | orchestrator | 58de8e73cd32 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-07-05 23:26:28.103779 | orchestrator | 812ae4797b1f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2025-07-05 23:26:28.103799 | orchestrator | 8337031214b7 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-07-05 23:26:28.103810 | orchestrator | ffc242b6b502 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-07-05 23:26:28.103821 | orchestrator | 3f6b1f4d155a registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-07-05 23:26:28.103832 | orchestrator | 80c8bfea3320 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-07-05 23:26:28.103842 | orchestrator | 74c736525cd9 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 
minutes ago Up 19 minutes (healthy) mariadb 2025-07-05 23:26:28.103853 | orchestrator | abeeecab5a58 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-07-05 23:26:28.103864 | orchestrator | 5a2b34fd2323 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-07-05 23:26:28.103876 | orchestrator | 572e1a3d8319 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0 2025-07-05 23:26:28.103887 | orchestrator | 70fd6217f8bb registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-07-05 23:26:28.103898 | orchestrator | 74af5fbbec93 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-07-05 23:26:28.103909 | orchestrator | 1cce90361cab registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-07-05 23:26:28.103920 | orchestrator | 53a77bdcc6b1 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-07-05 23:26:28.103944 | orchestrator | 0ba1bc4c0066 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-07-05 23:26:28.103956 | orchestrator | 8ebf4f1bdaeb registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2025-07-05 23:26:28.103968 | orchestrator | fb3c8e23d888 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-07-05 23:26:28.103979 | orchestrator | 4c614e36319f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0 2025-07-05 23:26:28.103990 | orchestrator | 8b35351c2f39 
registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-07-05 23:26:28.104001 | orchestrator | e48e7ded10f9 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-07-05 23:26:28.104012 | orchestrator | aeae4c3c4dc9 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2025-07-05 23:26:28.104034 | orchestrator | feed3e8087c6 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-07-05 23:26:28.104045 | orchestrator | 2ab2d1d9faeb registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2025-07-05 23:26:28.104056 | orchestrator | d9ddf8b21f77 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2025-07-05 23:26:28.104067 | orchestrator | 28982e63a249 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-07-05 23:26:28.104078 | orchestrator | ac494226f697 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-07-05 23:26:28.104089 | orchestrator | 5b4bf9bbaf31 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-07-05 23:26:28.371057 | orchestrator | 2025-07-05 23:26:28.371168 | orchestrator | ## Images @ testbed-node-0 2025-07-05 23:26:28.371185 | orchestrator | 2025-07-05 23:26:28.371198 | orchestrator | + echo 2025-07-05 23:26:28.371211 | orchestrator | + echo '## Images @ testbed-node-0' 2025-07-05 23:26:28.371223 | orchestrator | + echo 2025-07-05 23:26:28.371234 | orchestrator | + osism container testbed-node-0 images 2025-07-05 23:26:30.515104 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 
2025-07-05 23:26:30.515217 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 9055708a294c 20 hours ago 1.27GB 2025-07-05 23:26:30.515231 | orchestrator | registry.osism.tech/kolla/cron 2024.2 3b880ea0d69d 22 hours ago 318MB 2025-07-05 23:26:30.515243 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 5ca950a6aa35 22 hours ago 329MB 2025-07-05 23:26:30.515255 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 ea9305476548 22 hours ago 375MB 2025-07-05 23:26:30.515266 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 731dfd5914b7 22 hours ago 1.59GB 2025-07-05 23:26:30.515277 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 bc567b132568 22 hours ago 1.55GB 2025-07-05 23:26:30.515288 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 2c6fc6230e4c 22 hours ago 628MB 2025-07-05 23:26:30.515299 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 8a8dbfea0c7f 22 hours ago 417MB 2025-07-05 23:26:30.515309 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 0a0f53f728dd 22 hours ago 326MB 2025-07-05 23:26:30.515320 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e34f1fc4af9c 22 hours ago 318MB 2025-07-05 23:26:30.515331 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 a71957a80299 22 hours ago 1.01GB 2025-07-05 23:26:30.515343 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 8ecce820a828 22 hours ago 746MB 2025-07-05 23:26:30.515354 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 39537f282b82 22 hours ago 590MB 2025-07-05 23:26:30.515384 | orchestrator | registry.osism.tech/kolla/redis 2024.2 7ac1264b0ca6 22 hours ago 324MB 2025-07-05 23:26:30.515396 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 c7244cc5f2ed 22 hours ago 324MB 2025-07-05 23:26:30.515406 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 ee5ed8b90a46 22 hours ago 361MB 2025-07-05 23:26:30.515417 | orchestrator | 
registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 423103253094 22 hours ago 361MB 2025-07-05 23:26:30.515428 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 d4dc715adde8 22 hours ago 1.21GB 2025-07-05 23:26:30.515460 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 8dd9bb612c26 22 hours ago 351MB 2025-07-05 23:26:30.515472 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 f96cde6e36b9 22 hours ago 410MB 2025-07-05 23:26:30.515482 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 103545d4fcfa 22 hours ago 358MB 2025-07-05 23:26:30.515493 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 7baf242f6c2c 22 hours ago 344MB 2025-07-05 23:26:30.515504 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 09607ef196ac 22 hours ago 353MB 2025-07-05 23:26:30.515515 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 d6ed4bc39c3e 22 hours ago 947MB 2025-07-05 23:26:30.515525 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 132ea9992fee 22 hours ago 946MB 2025-07-05 23:26:30.515536 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 e03d30ade7d2 22 hours ago 947MB 2025-07-05 23:26:30.515547 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 24fc945339f6 22 hours ago 946MB 2025-07-05 23:26:30.515591 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 6c5b5d5674ec 22 hours ago 1.13GB 2025-07-05 23:26:30.515602 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 09cef56aba0e 22 hours ago 1.11GB 2025-07-05 23:26:30.515613 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 748e29a9e086 22 hours ago 1.11GB 2025-07-05 23:26:30.515625 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 79babe44f5fe 22 hours ago 1.24GB 2025-07-05 23:26:30.515638 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 21adf6c8ba71 22 hours ago 1.2GB 
2025-07-05 23:26:30.515651 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 44f2d1ff3f61 22 hours ago 1.31GB 2025-07-05 23:26:30.515664 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 084f1e6c65be 22 hours ago 1.12GB 2025-07-05 23:26:30.515676 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 6a949def6752 22 hours ago 1.1GB 2025-07-05 23:26:30.515688 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 2dfd73f10779 22 hours ago 1.12GB 2025-07-05 23:26:30.515721 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 2807d20ec51c 22 hours ago 1.1GB 2025-07-05 23:26:30.515735 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c335b4772e0a 22 hours ago 1.1GB 2025-07-05 23:26:30.515748 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 66d2b51a03b7 22 hours ago 1.04GB 2025-07-05 23:26:30.515760 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 429c5adfde4e 22 hours ago 1.04GB 2025-07-05 23:26:30.515774 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 f9fbc45d70b9 22 hours ago 1.15GB 2025-07-05 23:26:30.515786 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 fc2e39627d0e 22 hours ago 1.04GB 2025-07-05 23:26:30.515799 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6312ca888848 22 hours ago 1.06GB 2025-07-05 23:26:30.515812 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 e072c39c060c 22 hours ago 1.06GB 2025-07-05 23:26:30.515824 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 da1fbc66acaf 22 hours ago 1.06GB 2025-07-05 23:26:30.515837 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 10f347fc85ed 22 hours ago 1.29GB 2025-07-05 23:26:30.515850 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 eae115b69443 22 hours ago 1.29GB 2025-07-05 23:26:30.515862 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 
f2c54cc3f283 22 hours ago 1.29GB 2025-07-05 23:26:30.515882 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 0447ea3a5786 22 hours ago 1.42GB 2025-07-05 23:26:30.515895 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 d2bf49c592ae 22 hours ago 1.41GB 2025-07-05 23:26:30.515908 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4c6fbb59f8dc 22 hours ago 1.41GB 2025-07-05 23:26:30.515921 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 c468af6279d5 22 hours ago 1.04GB 2025-07-05 23:26:30.515933 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 b08bfe97d0ab 22 hours ago 1.04GB 2025-07-05 23:26:30.515945 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 a5c3942598f8 22 hours ago 1.04GB 2025-07-05 23:26:30.515959 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 8afad6b7af4f 22 hours ago 1.04GB 2025-07-05 23:26:30.515972 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 252b2f877294 22 hours ago 1.06GB 2025-07-05 23:26:30.515984 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 bc0794dcfb86 22 hours ago 1.05GB 2025-07-05 23:26:30.515996 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 dcfcd3a6ed01 22 hours ago 1.05GB 2025-07-05 23:26:30.516007 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 75cedde1e470 22 hours ago 1.05GB 2025-07-05 23:26:30.516018 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 9846618f9e81 22 hours ago 1.06GB 2025-07-05 23:26:30.516028 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 6a915b04dc0f 22 hours ago 1.05GB 2025-07-05 23:26:30.516039 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 b3b4e1c04017 22 hours ago 1.11GB 2025-07-05 23:26:30.516050 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 70d6b454a1ed 22 hours ago 1.11GB 2025-07-05 23:26:30.801302 | orchestrator | + for node in testbed-manager 
testbed-node-0 testbed-node-1 testbed-node-2 2025-07-05 23:26:30.802265 | orchestrator | ++ semver latest 5.0.0 2025-07-05 23:26:30.858575 | orchestrator | 2025-07-05 23:26:30.858670 | orchestrator | ## Containers @ testbed-node-1 2025-07-05 23:26:30.858684 | orchestrator | 2025-07-05 23:26:30.858696 | orchestrator | + [[ -1 -eq -1 ]] 2025-07-05 23:26:30.858707 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-05 23:26:30.858719 | orchestrator | + echo 2025-07-05 23:26:30.858738 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-07-05 23:26:30.858758 | orchestrator | + echo 2025-07-05 23:26:30.858775 | orchestrator | + osism container testbed-node-1 ps 2025-07-05 23:26:33.071524 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-05 23:26:33.071719 | orchestrator | 8e2ed28cbe5a registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-07-05 23:26:33.071737 | orchestrator | 2c30f65392c8 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-07-05 23:26:33.071767 | orchestrator | 001bad80f3d6 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-07-05 23:26:33.071778 | orchestrator | 7b31caea043b registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-07-05 23:26:33.071788 | orchestrator | 9a988fd058c8 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_api 2025-07-05 23:26:33.071798 | orchestrator | 25ab1098a204 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-07-05 23:26:33.071832 | orchestrator | fd89bd7d5e6f registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago 
Up 7 minutes (healthy) magnum_conductor 2025-07-05 23:26:33.071842 | orchestrator | d532aaa90cea registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-07-05 23:26:33.071852 | orchestrator | 873ca0b59f0a registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-07-05 23:26:33.071862 | orchestrator | a97c6f5f8bed registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-07-05 23:26:33.071871 | orchestrator | 51335b253e9c registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-07-05 23:26:33.071881 | orchestrator | da9c29ac5818 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-07-05 23:26:33.071891 | orchestrator | 99ede0e866ec registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-07-05 23:26:33.071901 | orchestrator | 3019c773c058 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2025-07-05 23:26:33.071910 | orchestrator | 4336eb20a7fd registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-07-05 23:26:33.071925 | orchestrator | 8af02a75963b registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-07-05 23:26:33.071935 | orchestrator | f84a40551b9d registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-07-05 23:26:33.071945 | orchestrator | 0d1b79172055 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes 
ago Up 10 minutes (healthy) designate_backend_bind9 2025-07-05 23:26:33.071955 | orchestrator | cd47fb2ab6e5 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-07-05 23:26:33.071964 | orchestrator | dd65ae5a9a9c registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-07-05 23:26:33.071975 | orchestrator | ade3dfbef791 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-07-05 23:26:33.072004 | orchestrator | b1ade7cd8189 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-07-05 23:26:33.072014 | orchestrator | 60b9de071d3e registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-07-05 23:26:33.072030 | orchestrator | a03c22bad497 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-07-05 23:26:33.072040 | orchestrator | bcc74b948a12 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-07-05 23:26:33.072058 | orchestrator | 67466a175fea registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-07-05 23:26:33.072068 | orchestrator | b32610cf4a2a registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-07-05 23:26:33.072078 | orchestrator | b50e6788e1cc registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-07-05 23:26:33.072090 | orchestrator | 581c75e0f9fc 
registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-07-05 23:26:33.072102 | orchestrator | 858b7ba0e156 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-07-05 23:26:33.072114 | orchestrator | 8a5fb00a5d79 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-07-05 23:26:33.072125 | orchestrator | abbf58fe0237 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2025-07-05 23:26:33.072136 | orchestrator | ed4fdfcf868d registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-07-05 23:26:33.072148 | orchestrator | 666dfd374985 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-07-05 23:26:33.072159 | orchestrator | 93223765c9ce registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-07-05 23:26:33.072169 | orchestrator | 6f8985368970 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-07-05 23:26:33.072179 | orchestrator | 47a703492970 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-07-05 23:26:33.072189 | orchestrator | 69aeee518d7b registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-07-05 23:26:33.072198 | orchestrator | 0f47e1e91e7d registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2025-07-05 23:26:33.072208 | orchestrator | 
77dd48be317c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1 2025-07-05 23:26:33.072218 | orchestrator | 78ed796d996b registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-07-05 23:26:33.072228 | orchestrator | 6caab1d66624 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-07-05 23:26:33.072237 | orchestrator | 50dd97968120 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-07-05 23:26:33.072247 | orchestrator | 643da43db493 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-07-05 23:26:33.072264 | orchestrator | ff852b5de097 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-07-05 23:26:33.072281 | orchestrator | 9dab98a0829f registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2025-07-05 23:26:33.072291 | orchestrator | acd4a7fd57e8 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-07-05 23:26:33.072313 | orchestrator | aacac6303a6f registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-07-05 23:26:33.072324 | orchestrator | b4a33722bd06 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-1 2025-07-05 23:26:33.072334 | orchestrator | ba02802aed51 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-07-05 23:26:33.072348 | orchestrator | 38ac9c93bd90 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago 
Up 28 minutes (healthy) openvswitch_db 2025-07-05 23:26:33.072358 | orchestrator | 33cf2b823c2f registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2025-07-05 23:26:33.072368 | orchestrator | c34ef1955f11 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2025-07-05 23:26:33.072378 | orchestrator | 4fdcdeee7a55 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2025-07-05 23:26:33.072387 | orchestrator | a18ea0cc00e9 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-07-05 23:26:33.072397 | orchestrator | a690750cf92d registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-07-05 23:26:33.072407 | orchestrator | 0cac31174aea registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-07-05 23:26:33.350053 | orchestrator | 2025-07-05 23:26:33.350135 | orchestrator | ## Images @ testbed-node-1 2025-07-05 23:26:33.350144 | orchestrator | 2025-07-05 23:26:33.350150 | orchestrator | + echo 2025-07-05 23:26:33.350156 | orchestrator | + echo '## Images @ testbed-node-1' 2025-07-05 23:26:33.350163 | orchestrator | + echo 2025-07-05 23:26:33.350168 | orchestrator | + osism container testbed-node-1 images 2025-07-05 23:26:35.575098 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-05 23:26:35.575210 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 9055708a294c 20 hours ago 1.27GB 2025-07-05 23:26:35.575226 | orchestrator | registry.osism.tech/kolla/cron 2024.2 3b880ea0d69d 22 hours ago 318MB 2025-07-05 23:26:35.575238 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 5ca950a6aa35 22 hours ago 329MB 2025-07-05 23:26:35.575250 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 731dfd5914b7 
22 hours ago 1.59GB 2025-07-05 23:26:35.575261 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 ea9305476548 22 hours ago 375MB 2025-07-05 23:26:35.575272 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 bc567b132568 22 hours ago 1.55GB 2025-07-05 23:26:35.575284 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 2c6fc6230e4c 22 hours ago 628MB 2025-07-05 23:26:35.575295 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 8a8dbfea0c7f 22 hours ago 417MB 2025-07-05 23:26:35.575332 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 0a0f53f728dd 22 hours ago 326MB 2025-07-05 23:26:35.575343 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e34f1fc4af9c 22 hours ago 318MB 2025-07-05 23:26:35.575354 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 a71957a80299 22 hours ago 1.01GB 2025-07-05 23:26:35.575365 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 8ecce820a828 22 hours ago 746MB 2025-07-05 23:26:35.575376 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 39537f282b82 22 hours ago 590MB 2025-07-05 23:26:35.575387 | orchestrator | registry.osism.tech/kolla/redis 2024.2 7ac1264b0ca6 22 hours ago 324MB 2025-07-05 23:26:35.575398 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 c7244cc5f2ed 22 hours ago 324MB 2025-07-05 23:26:35.575408 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 423103253094 22 hours ago 361MB 2025-07-05 23:26:35.575419 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 ee5ed8b90a46 22 hours ago 361MB 2025-07-05 23:26:35.575430 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 d4dc715adde8 22 hours ago 1.21GB 2025-07-05 23:26:35.575440 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 8dd9bb612c26 22 hours ago 351MB 2025-07-05 23:26:35.575451 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 f96cde6e36b9 22 hours ago 410MB 
2025-07-05 23:26:35.575462 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 103545d4fcfa 22 hours ago 358MB 2025-07-05 23:26:35.575472 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 7baf242f6c2c 22 hours ago 344MB 2025-07-05 23:26:35.575483 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 09607ef196ac 22 hours ago 353MB 2025-07-05 23:26:35.575494 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 d6ed4bc39c3e 22 hours ago 947MB 2025-07-05 23:26:35.575504 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 e03d30ade7d2 22 hours ago 947MB 2025-07-05 23:26:35.575515 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 132ea9992fee 22 hours ago 946MB 2025-07-05 23:26:35.575545 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 24fc945339f6 22 hours ago 946MB 2025-07-05 23:26:35.575605 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 6c5b5d5674ec 22 hours ago 1.13GB 2025-07-05 23:26:35.575619 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 09cef56aba0e 22 hours ago 1.11GB 2025-07-05 23:26:35.575633 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 748e29a9e086 22 hours ago 1.11GB 2025-07-05 23:26:35.575645 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 79babe44f5fe 22 hours ago 1.24GB 2025-07-05 23:26:35.575658 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 21adf6c8ba71 22 hours ago 1.2GB 2025-07-05 23:26:35.575671 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 44f2d1ff3f61 22 hours ago 1.31GB 2025-07-05 23:26:35.575684 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 084f1e6c65be 22 hours ago 1.12GB 2025-07-05 23:26:35.575696 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 6a949def6752 22 hours ago 1.1GB 2025-07-05 23:26:35.575709 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 
2dfd73f10779 22 hours ago 1.12GB 2025-07-05 23:26:35.575740 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 2807d20ec51c 22 hours ago 1.1GB 2025-07-05 23:26:35.575754 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c335b4772e0a 22 hours ago 1.1GB 2025-07-05 23:26:35.575776 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 f9fbc45d70b9 22 hours ago 1.15GB 2025-07-05 23:26:35.575790 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 fc2e39627d0e 22 hours ago 1.04GB 2025-07-05 23:26:35.575802 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6312ca888848 22 hours ago 1.06GB 2025-07-05 23:26:35.575815 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 e072c39c060c 22 hours ago 1.06GB 2025-07-05 23:26:35.575829 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 da1fbc66acaf 22 hours ago 1.06GB 2025-07-05 23:26:35.575841 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 10f347fc85ed 22 hours ago 1.29GB 2025-07-05 23:26:35.575854 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 eae115b69443 22 hours ago 1.29GB 2025-07-05 23:26:35.575867 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 f2c54cc3f283 22 hours ago 1.29GB 2025-07-05 23:26:35.575879 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 0447ea3a5786 22 hours ago 1.42GB 2025-07-05 23:26:35.575892 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 d2bf49c592ae 22 hours ago 1.41GB 2025-07-05 23:26:35.575905 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4c6fbb59f8dc 22 hours ago 1.41GB 2025-07-05 23:26:35.575918 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 252b2f877294 22 hours ago 1.06GB 2025-07-05 23:26:35.575931 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 bc0794dcfb86 22 hours ago 1.05GB 2025-07-05 23:26:35.575944 | orchestrator | 
registry.osism.tech/kolla/designate-central 2024.2 dcfcd3a6ed01 22 hours ago 1.05GB 2025-07-05 23:26:35.575955 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 75cedde1e470 22 hours ago 1.05GB 2025-07-05 23:26:35.575966 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 9846618f9e81 22 hours ago 1.06GB 2025-07-05 23:26:35.575977 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 6a915b04dc0f 22 hours ago 1.05GB 2025-07-05 23:26:35.829901 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-05 23:26:35.830353 | orchestrator | ++ semver latest 5.0.0 2025-07-05 23:26:35.880437 | orchestrator | 2025-07-05 23:26:35.880518 | orchestrator | ## Containers @ testbed-node-2 2025-07-05 23:26:35.880532 | orchestrator | 2025-07-05 23:26:35.880543 | orchestrator | + [[ -1 -eq -1 ]] 2025-07-05 23:26:35.880584 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-05 23:26:35.880597 | orchestrator | + echo 2025-07-05 23:26:35.880608 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-07-05 23:26:35.880620 | orchestrator | + echo 2025-07-05 23:26:35.880631 | orchestrator | + osism container testbed-node-2 ps 2025-07-05 23:26:38.109153 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-05 23:26:38.109250 | orchestrator | 018ab4b4d58b registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-07-05 23:26:38.109264 | orchestrator | b914f5186f9b registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-07-05 23:26:38.109274 | orchestrator | a1391b6dcbd1 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-07-05 23:26:38.109284 | orchestrator | 3cb56d52ceba 
registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-07-05 23:26:38.109294 | orchestrator | d10441311fb0 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-07-05 23:26:38.109327 | orchestrator | b8f787162a77 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-07-05 23:26:38.109337 | orchestrator | c108a8c412e1 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-07-05 23:26:38.109347 | orchestrator | 9ff111cae3b1 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-07-05 23:26:38.109356 | orchestrator | 1c175711f40c registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-07-05 23:26:38.109383 | orchestrator | d093c06f9be2 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-07-05 23:26:38.109393 | orchestrator | 1c8d189002e0 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-07-05 23:26:38.109403 | orchestrator | 3ded586cd226 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-07-05 23:26:38.109413 | orchestrator | 623a71d00019 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-07-05 23:26:38.109424 | orchestrator | c298eb785a57 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2025-07-05 23:26:38.109434 | orchestrator | ae0e220fe8b3 
registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-07-05 23:26:38.109444 | orchestrator | a7e8251b2cfd registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-07-05 23:26:38.109454 | orchestrator | 2dddd96d3b46 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-07-05 23:26:38.109464 | orchestrator | 19f3c2585b6a registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-07-05 23:26:38.109475 | orchestrator | 300daf835303 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-07-05 23:26:38.109485 | orchestrator | b1ec730c9d43 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-07-05 23:26:38.109495 | orchestrator | bb4f0cec88d8 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-07-05 23:26:38.109523 | orchestrator | 7d37ff219d9c registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-07-05 23:26:38.109534 | orchestrator | ef7d447170cf registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-07-05 23:26:38.109544 | orchestrator | 7719aa5e29b9 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-07-05 23:26:38.109610 | orchestrator | 18a5faced688 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes 
prometheus_elasticsearch_exporter 2025-07-05 23:26:38.109622 | orchestrator | 37248dad5fa5 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-07-05 23:26:38.109631 | orchestrator | d5b5606490ac registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-07-05 23:26:38.109641 | orchestrator | 6bef0fe342ee registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-07-05 23:26:38.109651 | orchestrator | 11370b381755 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-07-05 23:26:38.109660 | orchestrator | 85d3896b0043 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-07-05 23:26:38.109670 | orchestrator | 8fbc1578e85c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-07-05 23:26:38.109682 | orchestrator | 42ef62d5bdd2 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2025-07-05 23:26:38.109693 | orchestrator | 1cea87e3fbfd registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-07-05 23:26:38.109704 | orchestrator | 499a24d944e6 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-07-05 23:26:38.109716 | orchestrator | 0d2d9decd6ad registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-07-05 23:26:38.109727 | orchestrator | b81163f3a8d6 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 
minutes ago Up 17 minutes (healthy) keystone_ssh 2025-07-05 23:26:38.109738 | orchestrator | 2d6fdd7c35ed registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-07-05 23:26:38.109749 | orchestrator | 0a3e8203b9a4 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-07-05 23:26:38.109762 | orchestrator | c178f17fb006 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-07-05 23:26:38.109774 | orchestrator | da9ba2fed7ab registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2025-07-05 23:26:38.109785 | orchestrator | e162d3368f0b registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-07-05 23:26:38.109795 | orchestrator | 5b8ddcb6eada registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-07-05 23:26:38.109810 | orchestrator | 08db6e3161de registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-07-05 23:26:38.109820 | orchestrator | 137e727f4b99 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-07-05 23:26:38.109844 | orchestrator | 9323f1898548 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-07-05 23:26:38.109854 | orchestrator | 1c42a59464a5 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db 2025-07-05 23:26:38.109864 | orchestrator | 10ab7ff681e4 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-07-05 23:26:38.109874 | orchestrator | 
e479803abf6d registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-07-05 23:26:38.109888 | orchestrator | d1323167fd5f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2 2025-07-05 23:26:38.109898 | orchestrator | 36e06a8c7eae registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-07-05 23:26:38.109908 | orchestrator | 2640d51e7de7 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-07-05 23:26:38.109918 | orchestrator | 11f4df9d6da5 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2025-07-05 23:26:38.109928 | orchestrator | 4e887ee2ce80 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2025-07-05 23:26:38.109938 | orchestrator | c86d9c61f6ab registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2025-07-05 23:26:38.109947 | orchestrator | 084bec3cb0c2 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-07-05 23:26:38.109957 | orchestrator | f3726b352ddd registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-07-05 23:26:38.109968 | orchestrator | f8a51d1af569 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-07-05 23:26:38.399855 | orchestrator | 2025-07-05 23:26:38.399956 | orchestrator | ## Images @ testbed-node-2 2025-07-05 23:26:38.399973 | orchestrator | 2025-07-05 23:26:38.400002 | orchestrator | + echo 2025-07-05 23:26:38.400014 | orchestrator | + echo '## Images @ testbed-node-2' 2025-07-05 
23:26:38.400027 | orchestrator | + echo 2025-07-05 23:26:38.400038 | orchestrator | + osism container testbed-node-2 images 2025-07-05 23:26:40.566408 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-05 23:26:40.566515 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 9055708a294c 20 hours ago 1.27GB 2025-07-05 23:26:40.566529 | orchestrator | registry.osism.tech/kolla/cron 2024.2 3b880ea0d69d 22 hours ago 318MB 2025-07-05 23:26:40.566541 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 5ca950a6aa35 22 hours ago 329MB 2025-07-05 23:26:40.566616 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 731dfd5914b7 22 hours ago 1.59GB 2025-07-05 23:26:40.566631 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 ea9305476548 22 hours ago 375MB 2025-07-05 23:26:40.566642 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 bc567b132568 22 hours ago 1.55GB 2025-07-05 23:26:40.566653 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 8a8dbfea0c7f 22 hours ago 417MB 2025-07-05 23:26:40.566690 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 2c6fc6230e4c 22 hours ago 628MB 2025-07-05 23:26:40.566702 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 0a0f53f728dd 22 hours ago 326MB 2025-07-05 23:26:40.566712 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e34f1fc4af9c 22 hours ago 318MB 2025-07-05 23:26:40.566724 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 a71957a80299 22 hours ago 1.01GB 2025-07-05 23:26:40.566734 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 8ecce820a828 22 hours ago 746MB 2025-07-05 23:26:40.566745 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 39537f282b82 22 hours ago 590MB 2025-07-05 23:26:40.566755 | orchestrator | registry.osism.tech/kolla/redis 2024.2 7ac1264b0ca6 22 hours ago 324MB 2025-07-05 23:26:40.566766 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 c7244cc5f2ed 22 hours ago 
324MB
2025-07-05 23:26:40.566777 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 ee5ed8b90a46 22 hours ago 361MB
2025-07-05 23:26:40.566788 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 423103253094 22 hours ago 361MB
2025-07-05 23:26:40.566799 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 d4dc715adde8 22 hours ago 1.21GB
2025-07-05 23:26:40.566809 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 8dd9bb612c26 22 hours ago 351MB
2025-07-05 23:26:40.566820 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 f96cde6e36b9 22 hours ago 410MB
2025-07-05 23:26:40.566831 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 103545d4fcfa 22 hours ago 358MB
2025-07-05 23:26:40.566842 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 7baf242f6c2c 22 hours ago 344MB
2025-07-05 23:26:40.566853 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 09607ef196ac 22 hours ago 353MB
2025-07-05 23:26:40.566864 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 d6ed4bc39c3e 22 hours ago 947MB
2025-07-05 23:26:40.566875 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 132ea9992fee 22 hours ago 946MB
2025-07-05 23:26:40.566885 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 e03d30ade7d2 22 hours ago 947MB
2025-07-05 23:26:40.566896 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 24fc945339f6 22 hours ago 946MB
2025-07-05 23:26:40.566907 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 6c5b5d5674ec 22 hours ago 1.13GB
2025-07-05 23:26:40.566918 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 09cef56aba0e 22 hours ago 1.11GB
2025-07-05 23:26:40.566930 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 748e29a9e086 22 hours ago 1.11GB
2025-07-05 23:26:40.566943 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 79babe44f5fe 22 hours ago 1.24GB
2025-07-05 23:26:40.566956 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 21adf6c8ba71 22 hours ago 1.2GB
2025-07-05 23:26:40.566968 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 44f2d1ff3f61 22 hours ago 1.31GB
2025-07-05 23:26:40.566980 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 084f1e6c65be 22 hours ago 1.12GB
2025-07-05 23:26:40.566993 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 6a949def6752 22 hours ago 1.1GB
2025-07-05 23:26:40.567005 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 2dfd73f10779 22 hours ago 1.12GB
2025-07-05 23:26:40.567053 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 2807d20ec51c 22 hours ago 1.1GB
2025-07-05 23:26:40.567076 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c335b4772e0a 22 hours ago 1.1GB
2025-07-05 23:26:40.567091 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 f9fbc45d70b9 22 hours ago 1.15GB
2025-07-05 23:26:40.567103 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 fc2e39627d0e 22 hours ago 1.04GB
2025-07-05 23:26:40.567115 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6312ca888848 22 hours ago 1.06GB
2025-07-05 23:26:40.567128 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 e072c39c060c 22 hours ago 1.06GB
2025-07-05 23:26:40.567140 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 da1fbc66acaf 22 hours ago 1.06GB
2025-07-05 23:26:40.567152 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 10f347fc85ed 22 hours ago 1.29GB
2025-07-05 23:26:40.567166 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 eae115b69443 22 hours ago 1.29GB
2025-07-05 23:26:40.567178 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 f2c54cc3f283 22 hours ago 1.29GB
2025-07-05 23:26:40.567190 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 0447ea3a5786 22 hours ago 1.42GB
2025-07-05 23:26:40.567203 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 d2bf49c592ae 22 hours ago 1.41GB
2025-07-05 23:26:40.567216 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4c6fbb59f8dc 22 hours ago 1.41GB
2025-07-05 23:26:40.567228 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 252b2f877294 22 hours ago 1.06GB
2025-07-05 23:26:40.567240 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 bc0794dcfb86 22 hours ago 1.05GB
2025-07-05 23:26:40.567253 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 dcfcd3a6ed01 22 hours ago 1.05GB
2025-07-05 23:26:40.567265 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 75cedde1e470 22 hours ago 1.05GB
2025-07-05 23:26:40.567278 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 9846618f9e81 22 hours ago 1.06GB
2025-07-05 23:26:40.567289 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 6a915b04dc0f 22 hours ago 1.05GB
2025-07-05 23:26:40.854363 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2025-07-05 23:26:40.859534 | orchestrator | + set -e
2025-07-05 23:26:40.859661 | orchestrator | + source /opt/manager-vars.sh
2025-07-05 23:26:40.860886 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-05 23:26:40.860913 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-05 23:26:40.860923 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-05 23:26:40.860934 | orchestrator | ++ CEPH_VERSION=reef
2025-07-05 23:26:40.860944 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-05 23:26:40.860955 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-05 23:26:40.860965 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-05 23:26:40.860975 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-05 23:26:40.860985 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-05 23:26:40.860995 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-05 23:26:40.861005 | orchestrator | ++ export ARA=false
2025-07-05 23:26:40.861015 | orchestrator | ++ ARA=false
2025-07-05 23:26:40.861044 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-05 23:26:40.861055 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-05 23:26:40.861065 | orchestrator | ++ export TEMPEST=false
2025-07-05 23:26:40.861079 | orchestrator | ++ TEMPEST=false
2025-07-05 23:26:40.861089 | orchestrator | ++ export IS_ZUUL=true
2025-07-05 23:26:40.861099 | orchestrator | ++ IS_ZUUL=true
2025-07-05 23:26:40.861109 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.94
2025-07-05 23:26:40.861119 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.94
2025-07-05 23:26:40.861129 | orchestrator | ++ export EXTERNAL_API=false
2025-07-05 23:26:40.861139 | orchestrator | ++ EXTERNAL_API=false
2025-07-05 23:26:40.861149 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-05 23:26:40.861159 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-05 23:26:40.861188 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-05 23:26:40.861199 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-05 23:26:40.861209 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-05 23:26:40.861218 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-05 23:26:40.861228 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-05 23:26:40.861239 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2025-07-05 23:26:40.869853 | orchestrator | + set -e
2025-07-05 23:26:40.870822 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-05 23:26:40.870862 | orchestrator | ++ export INTERACTIVE=false
2025-07-05 23:26:40.870875 | orchestrator | ++ INTERACTIVE=false
2025-07-05 23:26:40.870886 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-05 23:26:40.870897 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-05 23:26:40.870908 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-07-05 23:26:40.871158 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-07-05 23:26:40.873934 | orchestrator |
2025-07-05 23:26:40.873977 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-05 23:26:40.873991 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-05 23:26:40.874004 | orchestrator | + echo
2025-07-05 23:26:40.874063 | orchestrator | + echo '# Ceph status'
2025-07-05 23:26:40.874078 | orchestrator | # Ceph status
2025-07-05 23:26:40.874090 | orchestrator |
2025-07-05 23:26:40.874101 | orchestrator | + echo
2025-07-05 23:26:40.874112 | orchestrator | + ceph -s
2025-07-05 23:26:41.446004 | orchestrator | cluster:
2025-07-05 23:26:41.446156 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2025-07-05 23:26:41.446174 | orchestrator | health: HEALTH_OK
2025-07-05 23:26:41.446188 | orchestrator |
2025-07-05 23:26:41.446201 | orchestrator | services:
2025-07-05 23:26:41.446213 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m)
2025-07-05 23:26:41.446227 | orchestrator | mgr: testbed-node-1(active, since 14m), standbys: testbed-node-2, testbed-node-0
2025-07-05 23:26:41.446240 | orchestrator | mds: 1/1 daemons up, 2 standby
2025-07-05 23:26:41.446252 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 23m)
2025-07-05 23:26:41.446264 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2025-07-05 23:26:41.446276 | orchestrator |
2025-07-05 23:26:41.446288 | orchestrator | data:
2025-07-05 23:26:41.446300 | orchestrator | volumes: 1/1 healthy
2025-07-05 23:26:41.446312 | orchestrator | pools: 14 pools, 401 pgs
2025-07-05 23:26:41.446324 | orchestrator | objects: 524 objects, 2.2 GiB
2025-07-05 23:26:41.446335 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2025-07-05 23:26:41.446347 | orchestrator | pgs: 401 active+clean
2025-07-05 23:26:41.446359 | orchestrator |
2025-07-05 23:26:41.495786 | orchestrator |
2025-07-05 23:26:41.495880 | orchestrator | # Ceph versions
2025-07-05 23:26:41.495894 | orchestrator |
2025-07-05 23:26:41.495905 | orchestrator | + echo
2025-07-05 23:26:41.495918 | orchestrator | + echo '# Ceph versions'
2025-07-05 23:26:41.495930 | orchestrator | + echo
2025-07-05 23:26:41.495941 | orchestrator | + ceph versions
2025-07-05 23:26:42.076324 | orchestrator | {
2025-07-05 23:26:42.076429 | orchestrator | "mon": {
2025-07-05 23:26:42.076445 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-07-05 23:26:42.076459 | orchestrator | },
2025-07-05 23:26:42.076471 | orchestrator | "mgr": {
2025-07-05 23:26:42.076482 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-07-05 23:26:42.076493 | orchestrator | },
2025-07-05 23:26:42.076504 | orchestrator | "osd": {
2025-07-05 23:26:42.076515 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-07-05 23:26:42.076526 | orchestrator | },
2025-07-05 23:26:42.076537 | orchestrator | "mds": {
2025-07-05 23:26:42.076605 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-07-05 23:26:42.076619 | orchestrator | },
2025-07-05 23:26:42.076630 | orchestrator | "rgw": {
2025-07-05 23:26:42.076641 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-07-05 23:26:42.076652 | orchestrator | },
2025-07-05 23:26:42.076663 | orchestrator | "overall": {
2025-07-05 23:26:42.076674 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-07-05 23:26:42.076686 | orchestrator | }
2025-07-05 23:26:42.076696 | orchestrator | }
2025-07-05 23:26:42.119625 | orchestrator |
2025-07-05 23:26:42.119724 | orchestrator | # Ceph OSD tree
2025-07-05 23:26:42.119739 | orchestrator |
2025-07-05 23:26:42.119782 | orchestrator | + echo
2025-07-05 23:26:42.119795 | orchestrator | + echo '# Ceph OSD tree'
2025-07-05 23:26:42.119807 | orchestrator | + echo
2025-07-05 23:26:42.119818 | orchestrator | + ceph osd df tree
2025-07-05 23:26:42.613478 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2025-07-05 23:26:42.613634 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default
2025-07-05 23:26:42.613653 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-3
2025-07-05 23:26:42.613666 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.14 1.04 201 up osd.0
2025-07-05 23:26:42.613677 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.69 0.96 189 up osd.5
2025-07-05 23:26:42.613689 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2025-07-05 23:26:42.613700 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 74 MiB 18 GiB 7.45 1.26 203 up osd.2
2025-07-05 23:26:42.613711 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 896 MiB 827 MiB 1 KiB 70 MiB 19 GiB 4.38 0.74 189 up osd.4
2025-07-05 23:26:42.613722 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2025-07-05 23:26:42.613733 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.47 1.09 184 up osd.1
2025-07-05 23:26:42.613743 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.36 0.91 204 up osd.3
2025-07-05 23:26:42.613754 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2025-07-05 23:26:42.613766 | orchestrator | MIN/MAX VAR: 0.74/1.26 STDDEV: 0.95
2025-07-05 23:26:42.657518 | orchestrator |
2025-07-05 23:26:42.657660 | orchestrator | # Ceph monitor status
2025-07-05 23:26:42.657677 | orchestrator |
2025-07-05 23:26:42.657689 | orchestrator | + echo
2025-07-05 23:26:42.657702 | orchestrator | + echo '# Ceph monitor status'
2025-07-05 23:26:42.657713 | orchestrator | + echo
2025-07-05 23:26:42.657725 | orchestrator | + ceph mon stat
2025-07-05 23:26:43.242266 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-07-05 23:26:43.287751 | orchestrator |
2025-07-05 23:26:43.287839 | orchestrator | # Ceph quorum status
2025-07-05 23:26:43.287855 | orchestrator |
2025-07-05 23:26:43.287867 | orchestrator | + echo
2025-07-05 23:26:43.287879 | orchestrator | + echo '# Ceph quorum status'
2025-07-05 23:26:43.287890 | orchestrator | + echo
2025-07-05 23:26:43.288157 | orchestrator | + ceph quorum_status
2025-07-05 23:26:43.288385 | orchestrator | + jq
2025-07-05 23:26:43.919515 | orchestrator | {
2025-07-05 23:26:43.919876 | orchestrator | "election_epoch": 8,
2025-07-05 23:26:43.919903 | orchestrator | "quorum": [
2025-07-05 23:26:43.919915 | orchestrator | 0,
2025-07-05 23:26:43.919927 | orchestrator | 1,
2025-07-05 23:26:43.919937 | orchestrator | 2
2025-07-05 23:26:43.919948 | orchestrator | ],
2025-07-05 23:26:43.919959 | orchestrator | "quorum_names": [
2025-07-05 23:26:43.919970 | orchestrator | "testbed-node-0",
2025-07-05 23:26:43.919981 | orchestrator | "testbed-node-1",
2025-07-05 23:26:43.919992 | orchestrator | "testbed-node-2"
2025-07-05 23:26:43.920003 | orchestrator | ],
2025-07-05 23:26:43.920015 | orchestrator | "quorum_leader_name": "testbed-node-0",
2025-07-05 23:26:43.920026 | orchestrator | "quorum_age": 1627,
2025-07-05 23:26:43.920038 | orchestrator | "features": {
2025-07-05 23:26:43.920049 | orchestrator | "quorum_con": "4540138322906710015",
2025-07-05 23:26:43.920060 | orchestrator | "quorum_mon": [
2025-07-05 23:26:43.920071 | orchestrator | "kraken",
2025-07-05 23:26:43.920082 | orchestrator | "luminous",
2025-07-05 23:26:43.920092 | orchestrator | "mimic",
2025-07-05 23:26:43.920128 | orchestrator | "osdmap-prune",
2025-07-05 23:26:43.920140 | orchestrator | "nautilus",
2025-07-05 23:26:43.920151 | orchestrator | "octopus",
2025-07-05 23:26:43.920161 | orchestrator | "pacific",
2025-07-05 23:26:43.920172 | orchestrator | "elector-pinging",
2025-07-05 23:26:43.920183 | orchestrator | "quincy",
2025-07-05 23:26:43.920194 | orchestrator | "reef"
2025-07-05 23:26:43.920205 | orchestrator | ]
2025-07-05 23:26:43.920216 | orchestrator | },
2025-07-05 23:26:43.920227 | orchestrator | "monmap": {
2025-07-05 23:26:43.920238 | orchestrator | "epoch": 1,
2025-07-05 23:26:43.920250 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2025-07-05 23:26:43.920278 | orchestrator | "modified": "2025-07-05T22:59:19.488845Z",
2025-07-05 23:26:43.920289 | orchestrator | "created": "2025-07-05T22:59:19.488845Z",
2025-07-05 23:26:43.920300 | orchestrator | "min_mon_release": 18,
2025-07-05 23:26:43.920312 | orchestrator | "min_mon_release_name": "reef",
2025-07-05 23:26:43.920322 | orchestrator | "election_strategy": 1,
2025-07-05 23:26:43.920333 | orchestrator | "disallowed_leaders: ": "",
2025-07-05 23:26:43.920344 | orchestrator | "stretch_mode": false,
2025-07-05 23:26:43.920355 | orchestrator | "tiebreaker_mon": "",
2025-07-05 23:26:43.920365 | orchestrator | "removed_ranks: ": "",
2025-07-05 23:26:43.920376 | orchestrator | "features": {
2025-07-05 23:26:43.920387 | orchestrator | "persistent": [
2025-07-05 23:26:43.920397 | orchestrator | "kraken",
2025-07-05 23:26:43.920408 | orchestrator | "luminous",
2025-07-05 23:26:43.920419 | orchestrator | "mimic",
2025-07-05 23:26:43.920429 | orchestrator | "osdmap-prune",
2025-07-05 23:26:43.920440 | orchestrator | "nautilus",
2025-07-05 23:26:43.920451 | orchestrator | "octopus",
2025-07-05 23:26:43.920462 | orchestrator | "pacific",
2025-07-05 23:26:43.920472 | orchestrator | "elector-pinging",
2025-07-05 23:26:43.920483 | orchestrator | "quincy",
2025-07-05 23:26:43.920494 | orchestrator | "reef"
2025-07-05 23:26:43.920507 | orchestrator | ],
2025-07-05 23:26:43.920520 | orchestrator | "optional": []
2025-07-05 23:26:43.920533 | orchestrator | },
2025-07-05 23:26:43.920545 | orchestrator | "mons": [
2025-07-05 23:26:43.920588 | orchestrator | {
2025-07-05 23:26:43.920602 | orchestrator | "rank": 0,
2025-07-05 23:26:43.920615 | orchestrator | "name": "testbed-node-0",
2025-07-05 23:26:43.920629 | orchestrator | "public_addrs": {
2025-07-05 23:26:43.920642 | orchestrator | "addrvec": [
2025-07-05 23:26:43.920655 | orchestrator | {
2025-07-05 23:26:43.920668 | orchestrator | "type": "v2",
2025-07-05 23:26:43.920681 | orchestrator | "addr": "192.168.16.10:3300",
2025-07-05 23:26:43.920695 | orchestrator | "nonce": 0
2025-07-05 23:26:43.920708 | orchestrator | },
2025-07-05 23:26:43.920721 | orchestrator | {
2025-07-05 23:26:43.920734 | orchestrator | "type": "v1",
2025-07-05 23:26:43.920748 | orchestrator | "addr": "192.168.16.10:6789",
2025-07-05 23:26:43.920761 | orchestrator | "nonce": 0
2025-07-05 23:26:43.920778 | orchestrator | }
2025-07-05 23:26:43.920797 | orchestrator | ]
2025-07-05 23:26:43.920815 | orchestrator | },
2025-07-05 23:26:43.920834 | orchestrator | "addr": "192.168.16.10:6789/0",
2025-07-05 23:26:43.920847 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2025-07-05 23:26:43.920861 | orchestrator | "priority": 0,
2025-07-05 23:26:43.920875 | orchestrator | "weight": 0,
2025-07-05 23:26:43.920886 | orchestrator | "crush_location": "{}"
2025-07-05 23:26:43.920897 | orchestrator | },
2025-07-05 23:26:43.920908 | orchestrator | {
2025-07-05 23:26:43.920918 | orchestrator | "rank": 1,
2025-07-05 23:26:43.920930 | orchestrator | "name": "testbed-node-1",
2025-07-05 23:26:43.920940 | orchestrator | "public_addrs": {
2025-07-05 23:26:43.920951 | orchestrator | "addrvec": [
2025-07-05 23:26:43.920962 | orchestrator | {
2025-07-05 23:26:43.920973 | orchestrator | "type": "v2",
2025-07-05 23:26:43.920984 | orchestrator | "addr": "192.168.16.11:3300",
2025-07-05 23:26:43.920995 | orchestrator | "nonce": 0
2025-07-05 23:26:43.921005 | orchestrator | },
2025-07-05 23:26:43.921016 | orchestrator | {
2025-07-05 23:26:43.921027 | orchestrator | "type": "v1",
2025-07-05 23:26:43.921038 | orchestrator | "addr": "192.168.16.11:6789",
2025-07-05 23:26:43.921048 | orchestrator | "nonce": 0
2025-07-05 23:26:43.921059 | orchestrator | }
2025-07-05 23:26:43.921070 | orchestrator | ]
2025-07-05 23:26:43.921081 | orchestrator | },
2025-07-05 23:26:43.921092 | orchestrator | "addr": "192.168.16.11:6789/0",
2025-07-05 23:26:43.921103 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2025-07-05 23:26:43.921122 | orchestrator | "priority": 0,
2025-07-05 23:26:43.921133 | orchestrator | "weight": 0,
2025-07-05 23:26:43.921144 | orchestrator | "crush_location": "{}"
2025-07-05 23:26:43.921155 | orchestrator | },
2025-07-05 23:26:43.921166 | orchestrator | {
2025-07-05 23:26:43.921177 | orchestrator | "rank": 2,
2025-07-05 23:26:43.921187 | orchestrator | "name": "testbed-node-2",
2025-07-05 23:26:43.921198 | orchestrator | "public_addrs": {
2025-07-05 23:26:43.921209 | orchestrator | "addrvec": [
2025-07-05 23:26:43.921220 | orchestrator | {
2025-07-05 23:26:43.921231 | orchestrator | "type": "v2",
2025-07-05 23:26:43.921242 | orchestrator | "addr": "192.168.16.12:3300",
2025-07-05 23:26:43.921252 | orchestrator | "nonce": 0
2025-07-05 23:26:43.921263 | orchestrator | },
2025-07-05 23:26:43.921274 | orchestrator | {
2025-07-05 23:26:43.921285 | orchestrator | "type": "v1",
2025-07-05 23:26:43.921296 | orchestrator | "addr": "192.168.16.12:6789",
2025-07-05 23:26:43.921307 | orchestrator | "nonce": 0
2025-07-05 23:26:43.921318 | orchestrator | }
2025-07-05 23:26:43.921328 | orchestrator | ]
2025-07-05 23:26:43.921339 | orchestrator | },
2025-07-05 23:26:43.921350 | orchestrator | "addr": "192.168.16.12:6789/0",
2025-07-05 23:26:43.921361 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2025-07-05 23:26:43.921371 | orchestrator | "priority": 0,
2025-07-05 23:26:43.921382 | orchestrator | "weight": 0,
2025-07-05 23:26:43.921393 | orchestrator | "crush_location": "{}"
2025-07-05 23:26:43.921404 | orchestrator | }
2025-07-05 23:26:43.921415 | orchestrator | ]
2025-07-05 23:26:43.921426 | orchestrator | }
2025-07-05 23:26:43.921437 | orchestrator | }
2025-07-05 23:26:43.921462 | orchestrator |
2025-07-05 23:26:43.921473 | orchestrator | # Ceph free space status
2025-07-05 23:26:43.921484 | orchestrator |
2025-07-05 23:26:43.921495 | orchestrator | + echo
2025-07-05 23:26:43.921506 | orchestrator | + echo '# Ceph free space status'
2025-07-05 23:26:43.921518 | orchestrator | + echo
2025-07-05 23:26:43.921529 | orchestrator | + ceph df
2025-07-05 23:26:44.529330 | orchestrator | --- RAW STORAGE ---
2025-07-05 23:26:44.529426 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-07-05 23:26:44.529459 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-07-05 23:26:44.529469 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-07-05 23:26:44.529478 | orchestrator |
2025-07-05 23:26:44.529489 | orchestrator | --- POOLS ---
2025-07-05 23:26:44.529499 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-07-05 23:26:44.529509 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB
2025-07-05 23:26:44.529519 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-07-05 23:26:44.529528 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-07-05 23:26:44.529537 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-07-05 23:26:44.529546 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-07-05 23:26:44.529617 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-07-05 23:26:44.529627 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB
2025-07-05 23:26:44.529636 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-07-05 23:26:44.529645 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB
2025-07-05 23:26:44.529654 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-07-05 23:26:44.529662 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-07-05 23:26:44.529671 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.96 35 GiB
2025-07-05 23:26:44.529680 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-07-05 23:26:44.529689 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-07-05 23:26:44.575051 | orchestrator | ++ semver latest 5.0.0
2025-07-05 23:26:44.630307 | orchestrator | + [[ -1 -eq -1 ]]
2025-07-05 23:26:44.630411 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-05 23:26:44.630427 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-07-05 23:26:44.630439 | orchestrator | + osism apply facts
2025-07-05 23:26:56.617638 | orchestrator | 2025-07-05 23:26:56 | INFO  | Task 8dbc9e19-e76b-4b99-840b-96c0bb35d2a6 (facts) was prepared for execution.
2025-07-05 23:26:56.617752 | orchestrator | 2025-07-05 23:26:56 | INFO  | It takes a moment until task 8dbc9e19-e76b-4b99-840b-96c0bb35d2a6 (facts) has been started and output is visible here.
2025-07-05 23:27:10.447772 | orchestrator |
2025-07-05 23:27:10.447894 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-07-05 23:27:10.447910 | orchestrator |
2025-07-05 23:27:10.447923 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-05 23:27:10.447935 | orchestrator | Saturday 05 July 2025 23:27:00 +0000 (0:00:00.284) 0:00:00.285 *********
2025-07-05 23:27:10.447946 | orchestrator | ok: [testbed-manager]
2025-07-05 23:27:10.447958 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:27:10.447970 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:27:10.447981 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:27:10.447992 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:27:10.448003 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:27:10.448014 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:27:10.448025 | orchestrator |
2025-07-05 23:27:10.448036 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-05 23:27:10.448048 | orchestrator | Saturday 05 July 2025 23:27:02 +0000 (0:00:01.506) 0:00:01.791 *********
2025-07-05 23:27:10.448126 | orchestrator | skipping: [testbed-manager]
2025-07-05 23:27:10.448140 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:27:10.448152 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:27:10.448163 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:27:10.448174 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:27:10.448185 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:27:10.448196 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:27:10.448207 | orchestrator |
2025-07-05 23:27:10.448218 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-05 23:27:10.448229 | orchestrator |
2025-07-05 23:27:10.448240 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-05 23:27:10.448251 | orchestrator | Saturday 05 July 2025 23:27:03 +0000 (0:00:01.315) 0:00:03.107 *********
2025-07-05 23:27:10.448262 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:27:10.448273 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:27:10.448285 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:27:10.448296 | orchestrator | ok: [testbed-manager]
2025-07-05 23:27:10.448307 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:27:10.448320 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:27:10.448333 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:27:10.448345 | orchestrator |
2025-07-05 23:27:10.448358 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-05 23:27:10.448371 | orchestrator |
2025-07-05 23:27:10.448384 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-05 23:27:10.448397 | orchestrator | Saturday 05 July 2025 23:27:09 +0000 (0:00:05.917) 0:00:09.024 *********
2025-07-05 23:27:10.448410 | orchestrator | skipping: [testbed-manager]
2025-07-05 23:27:10.448424 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:27:10.448437 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:27:10.448449 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:27:10.448462 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:27:10.448475 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:27:10.448487 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:27:10.448500 | orchestrator |
2025-07-05 23:27:10.448512 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 23:27:10.448525 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 23:27:10.448539 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 23:27:10.448621 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 23:27:10.448635 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 23:27:10.448648 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 23:27:10.448661 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 23:27:10.448674 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-05 23:27:10.448686 | orchestrator |
2025-07-05 23:27:10.448698 | orchestrator |
2025-07-05 23:27:10.448709 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 23:27:10.448720 | orchestrator | Saturday 05 July 2025 23:27:10 +0000 (0:00:00.542) 0:00:09.568 *********
2025-07-05 23:27:10.448732 | orchestrator | ===============================================================================
2025-07-05 23:27:10.448742 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.92s
2025-07-05 23:27:10.448754 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.51s
2025-07-05 23:27:10.448765 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s
2025-07-05 23:27:10.448776 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2025-07-05 23:27:10.742000 | orchestrator | + osism validate ceph-mons
2025-07-05 23:27:41.816862 | orchestrator |
2025-07-05 23:27:41.816989 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-07-05 23:27:41.817014 | orchestrator |
2025-07-05 23:27:41.817035 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-05 23:27:41.817056 | orchestrator | Saturday 05 July 2025 23:27:26 +0000 (0:00:00.425) 0:00:00.425 *********
2025-07-05 23:27:41.817075 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-05 23:27:41.817096 | orchestrator |
2025-07-05 23:27:41.817115 | orchestrator | TASK [Create report output directory] ******************************************
2025-07-05 23:27:41.817132 | orchestrator | Saturday 05 July 2025 23:27:27 +0000 (0:00:00.630) 0:00:01.056 *********
2025-07-05 23:27:41.817144 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-05 23:27:41.817154 | orchestrator |
2025-07-05 23:27:41.817165 | orchestrator | TASK [Define report vars] ******************************************************
2025-07-05 23:27:41.817176 | orchestrator | Saturday 05 July 2025 23:27:28 +0000 (0:00:00.818) 0:00:01.875 *********
2025-07-05 23:27:41.817207 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:27:41.817220 | orchestrator |
2025-07-05 23:27:41.817231 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-07-05 23:27:41.817242 | orchestrator | Saturday 05 July 2025 23:27:28 +0000 (0:00:00.241) 0:00:02.116 *********
2025-07-05 23:27:41.817253 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:27:41.817264 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:27:41.817275 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:27:41.817286 | orchestrator |
2025-07-05 23:27:41.817297 | orchestrator | TASK [Get container info] ******************************************************
2025-07-05 23:27:41.817307 | orchestrator | Saturday 05 July 2025 23:27:28 +0000 (0:00:00.266) 0:00:02.382 *********
2025-07-05 23:27:41.817318 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:27:41.817329 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:27:41.817340 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:27:41.817350 | orchestrator |
2025-07-05 23:27:41.817361 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-07-05 23:27:41.817376 | orchestrator | Saturday 05 July 2025 23:27:29 +0000 (0:00:00.972) 0:00:03.355 *********
2025-07-05 23:27:41.817395 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:27:41.817445 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:27:41.817465 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:27:41.817483 | orchestrator |
2025-07-05 23:27:41.817501 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-07-05 23:27:41.817519 | orchestrator | Saturday 05 July 2025 23:27:30 +0000 (0:00:00.280) 0:00:03.635 *********
2025-07-05 23:27:41.817567 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:27:41.817586 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:27:41.817605 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:27:41.817623 | orchestrator |
2025-07-05 23:27:41.817643 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-05 23:27:41.817662 | orchestrator | Saturday 05 July 2025 23:27:30 +0000 (0:00:00.496) 0:00:04.132 *********
2025-07-05 23:27:41.817681 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:27:41.817699 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:27:41.817717 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:27:41.817737 | orchestrator |
2025-07-05 23:27:41.817756 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-07-05 23:27:41.817775 | orchestrator | Saturday 05 July 2025 23:27:30 +0000 (0:00:00.292) 0:00:04.424 *********
2025-07-05 23:27:41.817788 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:27:41.817799 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:27:41.817810 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:27:41.817820 | orchestrator |
2025-07-05 23:27:41.817831 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-07-05 23:27:41.817842 | orchestrator | Saturday 05 July 2025 23:27:31 +0000 (0:00:00.305) 0:00:04.730 *********
2025-07-05 23:27:41.817853 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:27:41.817863 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:27:41.817874 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:27:41.817885 | orchestrator |
2025-07-05 23:27:41.817896 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-05 23:27:41.817906 | orchestrator | Saturday 05 July 2025 23:27:31 +0000 (0:00:00.290) 0:00:05.020 *********
2025-07-05 23:27:41.817917 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:27:41.817928 | orchestrator |
2025-07-05 23:27:41.817939 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-05 23:27:41.817949 | orchestrator | Saturday 05 July 2025 23:27:32 +0000 (0:00:00.647) 0:00:05.667 *********
2025-07-05 23:27:41.817960 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:27:41.817971 | orchestrator |
2025-07-05 23:27:41.817981 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-05 23:27:41.817992 | orchestrator | Saturday 05 July 2025 23:27:32 +0000 (0:00:00.237) 0:00:05.904 *********
2025-07-05 23:27:41.818002 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:27:41.818013 | orchestrator |
2025-07-05 23:27:41.818116 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-05 23:27:41.818138 | orchestrator | Saturday 05 July 2025 23:27:32 +0000 (0:00:00.242) 0:00:06.147 *********
2025-07-05 23:27:41.818158 | orchestrator |
2025-07-05 23:27:41.818182 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-05 23:27:41.818212 | orchestrator | Saturday 05 July 2025 23:27:32 +0000 (0:00:00.071) 0:00:06.218 *********
2025-07-05 23:27:41.818231 | orchestrator |
2025-07-05 23:27:41.818252 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-05 23:27:41.818272 | orchestrator | Saturday 05 July 2025 23:27:32 +0000 (0:00:00.071) 0:00:06.290 *********
2025-07-05 23:27:41.818290 | orchestrator |
2025-07-05 23:27:41.818311 | orchestrator | TASK [Print report file information] *******************************************
2025-07-05 23:27:41.818331 | orchestrator | Saturday 05 July 2025 23:27:32 +0000 (0:00:00.070) 0:00:06.361 *********
2025-07-05 23:27:41.818350 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:27:41.818369 | orchestrator |
2025-07-05 23:27:41.818388 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-07-05 23:27:41.818408 | orchestrator | Saturday 05 July 2025 23:27:33 +0000 (0:00:00.244) 0:00:06.606 *********
2025-07-05 23:27:41.818459 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:27:41.818471 | orchestrator |
2025-07-05 23:27:41.818506 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-07-05 23:27:41.818518 | orchestrator | Saturday 05 July 2025 23:27:33 +0000 (0:00:00.246) 0:00:06.852 *********
2025-07-05 23:27:41.818529 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:27:41.818571 | orchestrator |
2025-07-05 23:27:41.818588 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-07-05 23:27:41.818599 | orchestrator | Saturday 05 July 2025 23:27:33 +0000 (0:00:00.113) 0:00:06.965 *********
2025-07-05 23:27:41.818610 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:27:41.818621 | orchestrator |
2025-07-05 23:27:41.818631 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-07-05 23:27:41.818642 | orchestrator | Saturday 05 July 2025 23:27:34 +0000 (0:00:01.556) 0:00:08.522 *********
2025-07-05 23:27:41.818653 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:27:41.818664 | orchestrator |
2025-07-05 23:27:41.818675 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-07-05 23:27:41.818686 | orchestrator | Saturday 05 July 2025 23:27:35 +0000 (0:00:00.323) 0:00:08.846 *********
2025-07-05 23:27:41.818696 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:27:41.818707 | orchestrator |
2025-07-05 23:27:41.818718 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-07-05 23:27:41.818729 | orchestrator | Saturday 05 July 2025 23:27:35 +0000 (0:00:00.316) 0:00:09.163 *********
2025-07-05 23:27:41.818740 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:27:41.818751 | orchestrator |
2025-07-05 23:27:41.818762 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-07-05 23:27:41.818773 | orchestrator | Saturday 05 July 2025 23:27:35 +0000 (0:00:00.316) 0:00:09.479 *********
2025-07-05 23:27:41.818784 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:27:41.818794 | orchestrator |
2025-07-05 23:27:41.818805 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-07-05 23:27:41.818816 | orchestrator | Saturday 05 July 2025 23:27:36 +0000 (0:00:00.301) 0:00:09.781 *********
2025-07-05 23:27:41.818827 | orchestrator | skipping: [testbed-node-0]
2025-07-05 23:27:41.818838 | orchestrator |
2025-07-05 23:27:41.818864 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-07-05 23:27:41.818874 | orchestrator | Saturday 05 July 2025 23:27:36 +0000 (0:00:00.108) 0:00:09.889 *********
2025-07-05 23:27:41.818885 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:27:41.818896 | orchestrator |
2025-07-05 23:27:41.818907 | orchestrator | TASK
[Prepare status test vars] ************************************************ 2025-07-05 23:27:41.818918 | orchestrator | Saturday 05 July 2025 23:27:36 +0000 (0:00:00.128) 0:00:10.018 ********* 2025-07-05 23:27:41.818929 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:27:41.818939 | orchestrator | 2025-07-05 23:27:41.818950 | orchestrator | TASK [Gather status data] ****************************************************** 2025-07-05 23:27:41.818961 | orchestrator | Saturday 05 July 2025 23:27:36 +0000 (0:00:00.113) 0:00:10.131 ********* 2025-07-05 23:27:41.818972 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:27:41.818983 | orchestrator | 2025-07-05 23:27:41.818994 | orchestrator | TASK [Set health test data] **************************************************** 2025-07-05 23:27:41.819005 | orchestrator | Saturday 05 July 2025 23:27:37 +0000 (0:00:01.350) 0:00:11.482 ********* 2025-07-05 23:27:41.819016 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:27:41.819027 | orchestrator | 2025-07-05 23:27:41.819038 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-07-05 23:27:41.819048 | orchestrator | Saturday 05 July 2025 23:27:38 +0000 (0:00:00.316) 0:00:11.798 ********* 2025-07-05 23:27:41.819059 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:27:41.819070 | orchestrator | 2025-07-05 23:27:41.819081 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-07-05 23:27:41.819092 | orchestrator | Saturday 05 July 2025 23:27:38 +0000 (0:00:00.126) 0:00:11.924 ********* 2025-07-05 23:27:41.819110 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:27:41.819122 | orchestrator | 2025-07-05 23:27:41.819132 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-07-05 23:27:41.819143 | orchestrator | Saturday 05 July 2025 23:27:38 +0000 (0:00:00.144) 0:00:12.069 ********* 2025-07-05 23:27:41.819154 | 
orchestrator | skipping: [testbed-node-0] 2025-07-05 23:27:41.819165 | orchestrator | 2025-07-05 23:27:41.819176 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-07-05 23:27:41.819187 | orchestrator | Saturday 05 July 2025 23:27:38 +0000 (0:00:00.129) 0:00:12.199 ********* 2025-07-05 23:27:41.819197 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:27:41.819208 | orchestrator | 2025-07-05 23:27:41.819219 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-07-05 23:27:41.819230 | orchestrator | Saturday 05 July 2025 23:27:38 +0000 (0:00:00.315) 0:00:12.515 ********* 2025-07-05 23:27:41.819241 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-05 23:27:41.819251 | orchestrator | 2025-07-05 23:27:41.819262 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-07-05 23:27:41.819273 | orchestrator | Saturday 05 July 2025 23:27:39 +0000 (0:00:00.267) 0:00:12.782 ********* 2025-07-05 23:27:41.819284 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:27:41.819295 | orchestrator | 2025-07-05 23:27:41.819305 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-05 23:27:41.819316 | orchestrator | Saturday 05 July 2025 23:27:39 +0000 (0:00:00.244) 0:00:13.027 ********* 2025-07-05 23:27:41.819327 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-05 23:27:41.819338 | orchestrator | 2025-07-05 23:27:41.819349 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-05 23:27:41.819360 | orchestrator | Saturday 05 July 2025 23:27:41 +0000 (0:00:01.609) 0:00:14.637 ********* 2025-07-05 23:27:41.819370 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-05 23:27:41.819381 | orchestrator | 2025-07-05 23:27:41.819392 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2025-07-05 23:27:41.819402 | orchestrator | Saturday 05 July 2025 23:27:41 +0000 (0:00:00.262) 0:00:14.900 ********* 2025-07-05 23:27:41.819413 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-05 23:27:41.819424 | orchestrator | 2025-07-05 23:27:41.819442 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-05 23:27:44.148515 | orchestrator | Saturday 05 July 2025 23:27:41 +0000 (0:00:00.244) 0:00:15.144 ********* 2025-07-05 23:27:44.148667 | orchestrator | 2025-07-05 23:27:44.148688 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-05 23:27:44.148702 | orchestrator | Saturday 05 July 2025 23:27:41 +0000 (0:00:00.074) 0:00:15.219 ********* 2025-07-05 23:27:44.148713 | orchestrator | 2025-07-05 23:27:44.148725 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-05 23:27:44.148736 | orchestrator | Saturday 05 July 2025 23:27:41 +0000 (0:00:00.067) 0:00:15.287 ********* 2025-07-05 23:27:44.148746 | orchestrator | 2025-07-05 23:27:44.148757 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-07-05 23:27:44.148768 | orchestrator | Saturday 05 July 2025 23:27:41 +0000 (0:00:00.070) 0:00:15.357 ********* 2025-07-05 23:27:44.148780 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-05 23:27:44.148790 | orchestrator | 2025-07-05 23:27:44.148801 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-05 23:27:44.148812 | orchestrator | Saturday 05 July 2025 23:27:43 +0000 (0:00:01.519) 0:00:16.877 ********* 2025-07-05 23:27:44.148823 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-07-05 23:27:44.148834 | orchestrator |  "msg": [ 
2025-07-05 23:27:44.148853 | orchestrator |  "Validator run completed.", 2025-07-05 23:27:44.148874 | orchestrator |  "You can find the report file here:", 2025-07-05 23:27:44.148930 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-07-05T23:27:27+00:00-report.json", 2025-07-05 23:27:44.148950 | orchestrator |  "on the following host:", 2025-07-05 23:27:44.148968 | orchestrator |  "testbed-manager" 2025-07-05 23:27:44.148986 | orchestrator |  ] 2025-07-05 23:27:44.149004 | orchestrator | } 2025-07-05 23:27:44.149023 | orchestrator | 2025-07-05 23:27:44.149042 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:27:44.149062 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-05 23:27:44.149081 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 23:27:44.149095 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 23:27:44.149108 | orchestrator | 2025-07-05 23:27:44.149120 | orchestrator | 2025-07-05 23:27:44.149132 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:27:44.149145 | orchestrator | Saturday 05 July 2025 23:27:43 +0000 (0:00:00.540) 0:00:17.418 ********* 2025-07-05 23:27:44.149165 | orchestrator | =============================================================================== 2025-07-05 23:27:44.149217 | orchestrator | Aggregate test results step one ----------------------------------------- 1.61s 2025-07-05 23:27:44.149237 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.56s 2025-07-05 23:27:44.149256 | orchestrator | Write report file ------------------------------------------------------- 1.52s 2025-07-05 23:27:44.149275 | orchestrator | Gather status data 
------------------------------------------------------ 1.35s 2025-07-05 23:27:44.149295 | orchestrator | Get container info ------------------------------------------------------ 0.97s 2025-07-05 23:27:44.149313 | orchestrator | Create report output directory ------------------------------------------ 0.82s 2025-07-05 23:27:44.149332 | orchestrator | Aggregate test results step one ----------------------------------------- 0.65s 2025-07-05 23:27:44.149351 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s 2025-07-05 23:27:44.149370 | orchestrator | Print report file information ------------------------------------------- 0.54s 2025-07-05 23:27:44.149389 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 2025-07-05 23:27:44.149408 | orchestrator | Set quorum test data ---------------------------------------------------- 0.32s 2025-07-05 23:27:44.149427 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s 2025-07-05 23:27:44.149445 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.32s 2025-07-05 23:27:44.149463 | orchestrator | Set health test data ---------------------------------------------------- 0.32s 2025-07-05 23:27:44.149481 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.32s 2025-07-05 23:27:44.149499 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.31s 2025-07-05 23:27:44.149518 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.30s 2025-07-05 23:27:44.149581 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2025-07-05 23:27:44.149601 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.29s 2025-07-05 23:27:44.149620 | orchestrator | Set test result to failed if 
container is missing ----------------------- 0.28s 2025-07-05 23:27:44.438928 | orchestrator | + osism validate ceph-mgrs 2025-07-05 23:28:15.046706 | orchestrator | 2025-07-05 23:28:15.046870 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-07-05 23:28:15.046892 | orchestrator | 2025-07-05 23:28:15.046907 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-07-05 23:28:15.046925 | orchestrator | Saturday 05 July 2025 23:28:00 +0000 (0:00:00.480) 0:00:00.480 ********* 2025-07-05 23:28:15.046968 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-05 23:28:15.046984 | orchestrator | 2025-07-05 23:28:15.047000 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-05 23:28:15.047016 | orchestrator | Saturday 05 July 2025 23:28:01 +0000 (0:00:00.660) 0:00:01.141 ********* 2025-07-05 23:28:15.047032 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-05 23:28:15.047047 | orchestrator | 2025-07-05 23:28:15.047063 | orchestrator | TASK [Define report vars] ****************************************************** 2025-07-05 23:28:15.047079 | orchestrator | Saturday 05 July 2025 23:28:02 +0000 (0:00:00.794) 0:00:01.935 ********* 2025-07-05 23:28:15.047094 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:28:15.047111 | orchestrator | 2025-07-05 23:28:15.047127 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-07-05 23:28:15.047143 | orchestrator | Saturday 05 July 2025 23:28:02 +0000 (0:00:00.242) 0:00:02.178 ********* 2025-07-05 23:28:15.047160 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:28:15.047176 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:28:15.047193 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:28:15.047209 | orchestrator | 2025-07-05 23:28:15.047226 | orchestrator | TASK [Get container 
info] ****************************************************** 2025-07-05 23:28:15.047243 | orchestrator | Saturday 05 July 2025 23:28:02 +0000 (0:00:00.310) 0:00:02.488 ********* 2025-07-05 23:28:15.047259 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:28:15.047276 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:28:15.047293 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:28:15.047311 | orchestrator | 2025-07-05 23:28:15.047331 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-07-05 23:28:15.047349 | orchestrator | Saturday 05 July 2025 23:28:03 +0000 (0:00:00.983) 0:00:03.472 ********* 2025-07-05 23:28:15.047367 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:28:15.047384 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:28:15.047403 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:28:15.047421 | orchestrator | 2025-07-05 23:28:15.047440 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-07-05 23:28:15.047458 | orchestrator | Saturday 05 July 2025 23:28:03 +0000 (0:00:00.269) 0:00:03.741 ********* 2025-07-05 23:28:15.047475 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:28:15.047492 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:28:15.047509 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:28:15.047554 | orchestrator | 2025-07-05 23:28:15.047572 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-05 23:28:15.047588 | orchestrator | Saturday 05 July 2025 23:28:04 +0000 (0:00:00.450) 0:00:04.192 ********* 2025-07-05 23:28:15.047604 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:28:15.047620 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:28:15.047636 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:28:15.047652 | orchestrator | 2025-07-05 23:28:15.047717 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2025-07-05 23:28:15.047734 | orchestrator | Saturday 05 July 2025 23:28:04 +0000 (0:00:00.295) 0:00:04.487 ********* 2025-07-05 23:28:15.047750 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:28:15.047766 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:28:15.047781 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:28:15.047796 | orchestrator | 2025-07-05 23:28:15.047812 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-07-05 23:28:15.047827 | orchestrator | Saturday 05 July 2025 23:28:04 +0000 (0:00:00.283) 0:00:04.770 ********* 2025-07-05 23:28:15.047841 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:28:15.047856 | orchestrator | ok: [testbed-node-1] 2025-07-05 23:28:15.047871 | orchestrator | ok: [testbed-node-2] 2025-07-05 23:28:15.047886 | orchestrator | 2025-07-05 23:28:15.047902 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-05 23:28:15.047919 | orchestrator | Saturday 05 July 2025 23:28:05 +0000 (0:00:00.320) 0:00:05.091 ********* 2025-07-05 23:28:15.047934 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:28:15.047964 | orchestrator | 2025-07-05 23:28:15.047980 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-05 23:28:15.047996 | orchestrator | Saturday 05 July 2025 23:28:05 +0000 (0:00:00.650) 0:00:05.741 ********* 2025-07-05 23:28:15.048012 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:28:15.048029 | orchestrator | 2025-07-05 23:28:15.048044 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-05 23:28:15.048061 | orchestrator | Saturday 05 July 2025 23:28:06 +0000 (0:00:00.253) 0:00:05.994 ********* 2025-07-05 23:28:15.048076 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:28:15.048092 | orchestrator | 2025-07-05 23:28:15.048107 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2025-07-05 23:28:15.048123 | orchestrator | Saturday 05 July 2025 23:28:06 +0000 (0:00:00.244) 0:00:06.239 ********* 2025-07-05 23:28:15.048139 | orchestrator | 2025-07-05 23:28:15.048155 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-05 23:28:15.048171 | orchestrator | Saturday 05 July 2025 23:28:06 +0000 (0:00:00.071) 0:00:06.311 ********* 2025-07-05 23:28:15.048186 | orchestrator | 2025-07-05 23:28:15.048202 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-05 23:28:15.048218 | orchestrator | Saturday 05 July 2025 23:28:06 +0000 (0:00:00.072) 0:00:06.383 ********* 2025-07-05 23:28:15.048234 | orchestrator | 2025-07-05 23:28:15.048249 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-05 23:28:15.048266 | orchestrator | Saturday 05 July 2025 23:28:06 +0000 (0:00:00.072) 0:00:06.456 ********* 2025-07-05 23:28:15.048281 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:28:15.048297 | orchestrator | 2025-07-05 23:28:15.048372 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-07-05 23:28:15.048390 | orchestrator | Saturday 05 July 2025 23:28:06 +0000 (0:00:00.263) 0:00:06.719 ********* 2025-07-05 23:28:15.048405 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:28:15.048421 | orchestrator | 2025-07-05 23:28:15.048463 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-07-05 23:28:15.048480 | orchestrator | Saturday 05 July 2025 23:28:07 +0000 (0:00:00.287) 0:00:07.007 ********* 2025-07-05 23:28:15.048495 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:28:15.048511 | orchestrator | 2025-07-05 23:28:15.048548 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2025-07-05 23:28:15.048564 | orchestrator | Saturday 05 July 2025 23:28:07 +0000 (0:00:00.115) 0:00:07.123 ********* 2025-07-05 23:28:15.048580 | orchestrator | changed: [testbed-node-0] 2025-07-05 23:28:15.048596 | orchestrator | 2025-07-05 23:28:15.048611 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-07-05 23:28:15.048628 | orchestrator | Saturday 05 July 2025 23:28:09 +0000 (0:00:02.015) 0:00:09.138 ********* 2025-07-05 23:28:15.048643 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:28:15.048659 | orchestrator | 2025-07-05 23:28:15.048675 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-07-05 23:28:15.048690 | orchestrator | Saturday 05 July 2025 23:28:09 +0000 (0:00:00.243) 0:00:09.382 ********* 2025-07-05 23:28:15.048706 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:28:15.048721 | orchestrator | 2025-07-05 23:28:15.048737 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-07-05 23:28:15.048752 | orchestrator | Saturday 05 July 2025 23:28:10 +0000 (0:00:00.699) 0:00:10.081 ********* 2025-07-05 23:28:15.048767 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:28:15.048782 | orchestrator | 2025-07-05 23:28:15.048799 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-07-05 23:28:15.048815 | orchestrator | Saturday 05 July 2025 23:28:10 +0000 (0:00:00.139) 0:00:10.221 ********* 2025-07-05 23:28:15.048830 | orchestrator | ok: [testbed-node-0] 2025-07-05 23:28:15.048845 | orchestrator | 2025-07-05 23:28:15.048862 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-07-05 23:28:15.048879 | orchestrator | Saturday 05 July 2025 23:28:10 +0000 (0:00:00.140) 0:00:10.362 ********* 2025-07-05 23:28:15.048908 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-05 
23:28:15.048924 | orchestrator | 2025-07-05 23:28:15.048940 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-07-05 23:28:15.048956 | orchestrator | Saturday 05 July 2025 23:28:10 +0000 (0:00:00.257) 0:00:10.620 ********* 2025-07-05 23:28:15.048972 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:28:15.048987 | orchestrator | 2025-07-05 23:28:15.049003 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-05 23:28:15.049019 | orchestrator | Saturday 05 July 2025 23:28:11 +0000 (0:00:00.270) 0:00:10.890 ********* 2025-07-05 23:28:15.049034 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-05 23:28:15.049050 | orchestrator | 2025-07-05 23:28:15.049066 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-05 23:28:15.049082 | orchestrator | Saturday 05 July 2025 23:28:12 +0000 (0:00:01.242) 0:00:12.132 ********* 2025-07-05 23:28:15.049097 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-05 23:28:15.049113 | orchestrator | 2025-07-05 23:28:15.049129 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-05 23:28:15.049146 | orchestrator | Saturday 05 July 2025 23:28:12 +0000 (0:00:00.294) 0:00:12.427 ********* 2025-07-05 23:28:15.049162 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-05 23:28:15.049178 | orchestrator | 2025-07-05 23:28:15.049194 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-05 23:28:15.049208 | orchestrator | Saturday 05 July 2025 23:28:12 +0000 (0:00:00.245) 0:00:12.672 ********* 2025-07-05 23:28:15.049224 | orchestrator | 2025-07-05 23:28:15.049240 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-05 23:28:15.049254 | orchestrator 
| Saturday 05 July 2025 23:28:12 +0000 (0:00:00.070) 0:00:12.743 ********* 2025-07-05 23:28:15.049269 | orchestrator | 2025-07-05 23:28:15.049286 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-05 23:28:15.049301 | orchestrator | Saturday 05 July 2025 23:28:12 +0000 (0:00:00.066) 0:00:12.810 ********* 2025-07-05 23:28:15.049317 | orchestrator | 2025-07-05 23:28:15.049333 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-07-05 23:28:15.049349 | orchestrator | Saturday 05 July 2025 23:28:12 +0000 (0:00:00.071) 0:00:12.881 ********* 2025-07-05 23:28:15.049365 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-05 23:28:15.049380 | orchestrator | 2025-07-05 23:28:15.049396 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-05 23:28:15.049412 | orchestrator | Saturday 05 July 2025 23:28:14 +0000 (0:00:01.620) 0:00:14.501 ********* 2025-07-05 23:28:15.049428 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-07-05 23:28:15.049445 | orchestrator |  "msg": [ 2025-07-05 23:28:15.049461 | orchestrator |  "Validator run completed.", 2025-07-05 23:28:15.049477 | orchestrator |  "You can find the report file here:", 2025-07-05 23:28:15.049493 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-07-05T23:28:01+00:00-report.json", 2025-07-05 23:28:15.049511 | orchestrator |  "on the following host:", 2025-07-05 23:28:15.049585 | orchestrator |  "testbed-manager" 2025-07-05 23:28:15.049602 | orchestrator |  ] 2025-07-05 23:28:15.049618 | orchestrator | } 2025-07-05 23:28:15.049634 | orchestrator | 2025-07-05 23:28:15.049649 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-05 23:28:15.049667 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2025-07-05 23:28:15.049684 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 23:28:15.049714 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 23:28:15.331346 | orchestrator | 2025-07-05 23:28:15.331443 | orchestrator | 2025-07-05 23:28:15.331457 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:28:15.331471 | orchestrator | Saturday 05 July 2025 23:28:15 +0000 (0:00:00.410) 0:00:14.912 ********* 2025-07-05 23:28:15.331482 | orchestrator | =============================================================================== 2025-07-05 23:28:15.331493 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.02s 2025-07-05 23:28:15.331503 | orchestrator | Write report file ------------------------------------------------------- 1.62s 2025-07-05 23:28:15.331513 | orchestrator | Aggregate test results step one ----------------------------------------- 1.24s 2025-07-05 23:28:15.331566 | orchestrator | Get container info ------------------------------------------------------ 0.98s 2025-07-05 23:28:15.331579 | orchestrator | Create report output directory ------------------------------------------ 0.79s 2025-07-05 23:28:15.331589 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.70s 2025-07-05 23:28:15.331598 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s 2025-07-05 23:28:15.331608 | orchestrator | Aggregate test results step one ----------------------------------------- 0.65s 2025-07-05 23:28:15.331639 | orchestrator | Set test result to passed if container is existing ---------------------- 0.45s 2025-07-05 23:28:15.331649 | orchestrator | Print report file information ------------------------------------------- 0.41s 2025-07-05 23:28:15.331659 | 
orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.32s 2025-07-05 23:28:15.331668 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2025-07-05 23:28:15.331678 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2025-07-05 23:28:15.331687 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2025-07-05 23:28:15.331697 | orchestrator | Fail due to missing containers ------------------------------------------ 0.29s 2025-07-05 23:28:15.331706 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.28s 2025-07-05 23:28:15.331716 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.27s 2025-07-05 23:28:15.331725 | orchestrator | Set test result to failed if container is missing ----------------------- 0.27s 2025-07-05 23:28:15.331735 | orchestrator | Print report file information ------------------------------------------- 0.26s 2025-07-05 23:28:15.331745 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.26s 2025-07-05 23:28:15.588350 | orchestrator | + osism validate ceph-osds 2025-07-05 23:28:34.762341 | orchestrator | 2025-07-05 23:28:34.762491 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-07-05 23:28:34.762585 | orchestrator | 2025-07-05 23:28:34.762610 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-07-05 23:28:34.762630 | orchestrator | Saturday 05 July 2025 23:28:30 +0000 (0:00:00.435) 0:00:00.436 ********* 2025-07-05 23:28:34.762648 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-05 23:28:34.762666 | orchestrator | 2025-07-05 23:28:34.762684 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2025-07-05 23:28:34.762703 | orchestrator | Saturday 05 July 2025 23:28:31 +0000 (0:00:00.638) 0:00:01.074 ********* 2025-07-05 23:28:34.762721 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-05 23:28:34.762741 | orchestrator | 2025-07-05 23:28:34.762761 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-05 23:28:34.762781 | orchestrator | Saturday 05 July 2025 23:28:31 +0000 (0:00:00.219) 0:00:01.294 ********* 2025-07-05 23:28:34.762800 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-05 23:28:34.762817 | orchestrator | 2025-07-05 23:28:34.762838 | orchestrator | TASK [Define report vars] ****************************************************** 2025-07-05 23:28:34.762893 | orchestrator | Saturday 05 July 2025 23:28:32 +0000 (0:00:00.911) 0:00:02.205 ********* 2025-07-05 23:28:34.762907 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:28:34.762921 | orchestrator | 2025-07-05 23:28:34.762934 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-07-05 23:28:34.762948 | orchestrator | Saturday 05 July 2025 23:28:32 +0000 (0:00:00.138) 0:00:02.344 ********* 2025-07-05 23:28:34.762961 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:28:34.762974 | orchestrator | 2025-07-05 23:28:34.762987 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-07-05 23:28:34.763000 | orchestrator | Saturday 05 July 2025 23:28:32 +0000 (0:00:00.120) 0:00:02.464 ********* 2025-07-05 23:28:34.763012 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:28:34.763025 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:28:34.763039 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:28:34.763056 | orchestrator | 2025-07-05 23:28:34.763074 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2025-07-05 23:28:34.763087 | orchestrator | Saturday 05 July 2025 23:28:33 +0000 (0:00:00.347) 0:00:02.812 ********* 2025-07-05 23:28:34.763100 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:28:34.763112 | orchestrator | 2025-07-05 23:28:34.763124 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-07-05 23:28:34.763137 | orchestrator | Saturday 05 July 2025 23:28:33 +0000 (0:00:00.141) 0:00:02.954 ********* 2025-07-05 23:28:34.763149 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:28:34.763162 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:28:34.763174 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:28:34.763187 | orchestrator | 2025-07-05 23:28:34.763200 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-07-05 23:28:34.763228 | orchestrator | Saturday 05 July 2025 23:28:33 +0000 (0:00:00.310) 0:00:03.264 ********* 2025-07-05 23:28:34.763239 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:28:34.763250 | orchestrator | 2025-07-05 23:28:34.763261 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-05 23:28:34.763272 | orchestrator | Saturday 05 July 2025 23:28:34 +0000 (0:00:00.521) 0:00:03.786 ********* 2025-07-05 23:28:34.763283 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:28:34.763294 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:28:34.763304 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:28:34.763315 | orchestrator | 2025-07-05 23:28:34.763326 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-07-05 23:28:34.763337 | orchestrator | Saturday 05 July 2025 23:28:34 +0000 (0:00:00.440) 0:00:04.227 ********* 2025-07-05 23:28:34.763350 | orchestrator | skipping: [testbed-node-3] => (item={'id': '173f33aa72486b3c489c237561bc1e7ce107f547374554cef4f2d2bcb8cb91ec', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-07-05 23:28:34.763364 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b3501a00aafc846fdc930178fdad3a3d0c53e6b19f2f759a32e0393c64d1793c', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-05 23:28:34.763377 | orchestrator | skipping: [testbed-node-3] => (item={'id': '67336fe3de84a8c5ab8d39cc906ea4d9bc6f31323244694ace18d30aa297dad8', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-05 23:28:34.763391 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd3490cc67fe77114be691be3626da89584682d9669119f5ac290a6f1faf47ebd', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-05 23:28:34.763410 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8e5ccd858ab35370e5d79bf2d265754c9a210c3c11b5b73739425fa23d1eb98b', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-05 23:28:34.763453 | orchestrator | skipping: [testbed-node-3] => (item={'id': '03c281b74f9532f9333f7400f410aba9ad72ecb8193c94eedec86443b08430d0', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-05 23:28:34.763465 | orchestrator | skipping: [testbed-node-3] => (item={'id': '77b7ee8bd874b2a360b7e936026d8d1eb588e9f0402360700fc5808b87fdb988', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-05 23:28:34.763477 | orchestrator | 
skipping: [testbed-node-3] => (item={'id': '13b96e1121a9a67518f89d49258bb0653621d120781addcaa0beef7e8f5318f5', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-05 23:28:34.763488 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd8821f2db7e0914731bdf48c51f637ca84d84b41bda3ecddbe2dfd2e803fb4e1', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-05 23:28:34.763503 | orchestrator | skipping: [testbed-node-3] => (item={'id': '81a3289dd79dc60d3f0cd10f81873cc548997e5cf899c83e91e47de753a643f5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-07-05 23:28:34.763539 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8d8468ff74bd2b26fdd4be1114696b8cf6cae07c17799a78d3c95aa787cdb4d5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2025-07-05 23:28:34.763551 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a21b4f5bf5471d0ce737e3444084052259f3b1fe5cea77c228d09d7145d09886', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-05 23:28:34.763563 | orchestrator | ok: [testbed-node-3] => (item={'id': 'c514b308ba7ba3dccb43aa1840a0d7f979e7f2f9fc3d41f587be17f44703b54b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-07-05 23:28:34.763580 | orchestrator | ok: [testbed-node-3] => (item={'id': '9e3a74db542440a673b781c24d93c999a2409fe1bbeb96e1ee554b46ebe9a4cb', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 
'Up 23 minutes'}) 2025-07-05 23:28:34.763592 | orchestrator | skipping: [testbed-node-3] => (item={'id': '900fbf25f7f7fcdd691f89d84c2ce3b29ec75443ab77c566328e4eef3e81ed29', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-07-05 23:28:34.763603 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3dd7b0c9ce1eac7e10339e578bf497653ed9d4b57a7071a960648123464fdf85', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-05 23:28:34.763614 | orchestrator | skipping: [testbed-node-3] => (item={'id': '86535fe9a38ee7271c265b4aeca6e349e5b27d2ce24b989e3c379237c56cc83b', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-05 23:28:34.763626 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aabf5c0d2fb7e6fe5fd99a048c31be6792921eb44c45c06cca5cbd183db5f382', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-05 23:28:34.763637 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3e2861ca2f50634d4385711e8eb4fb6e975657a4ac7b83ba245530ffb497ff93', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-05 23:28:34.763655 | orchestrator | skipping: [testbed-node-3] => (item={'id': '387320eb19bba564f2299b24e9ee623ae83acc9610c4dd7c46f5a38a738c273d', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-05 23:28:34.763672 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c7144c0194785c50584a76aa36e46c2e5c5774f410b4397af3716779c569f0bf', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': 
'/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-07-05 23:28:34.763701 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1d62ddcc97d78d1e69d6551e00cfa0c1eeeabec23d7b702b5087ee36aaf34e19', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-05 23:28:34.993959 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2c5607d46c669701afd7469d64bd7bee9183384886644532be247eb147240d69', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-05 23:28:34.994114 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2a0cd022c7d2b49224fb5d744e68951d457cff33b70ceb9fab52e795d4744db3', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-05 23:28:34.994133 | orchestrator | skipping: [testbed-node-4] => (item={'id': '169d5241fa92aebc460cb65c05e1c406e7acaabfb78d5fd682589fd9938cc0d7', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-05 23:28:34.994148 | orchestrator | skipping: [testbed-node-4] => (item={'id': '958244e1245f5402d29ad703a7e09e29aa03e28526a6cc3c780b65051f25d872', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-05 23:28:34.994159 | orchestrator | skipping: [testbed-node-4] => (item={'id': '18246b1d1ba3126d1f93c98b12191dc1feb591080de48291ffc6fcdc52635494', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-05 23:28:34.994170 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'444284d942bc25128b372d28f3cafd0bb2147914a964238c5f0122bff994761f', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-05 23:28:34.994182 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a41e8c165c117b97f4ad8823ccc515d94ff4d71e8a6f79577c063300642a745e', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-05 23:28:34.994193 | orchestrator | skipping: [testbed-node-4] => (item={'id': '49a2a7134bb0d24751cd151e55757517a75a073a0a2514b4bbf5e2ce189cced6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-07-05 23:28:34.994204 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4085a6b6872586024d6fc97f14792388e8e12edff325950c84f31f10b67d140c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2025-07-05 23:28:34.994215 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3347f1c673608758bad08afa78669ddd725283b51a592b85f92a232e59afd3cc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-05 23:28:34.994246 | orchestrator | ok: [testbed-node-4] => (item={'id': '8680f34f05317ef1cae0cd27ca3009e3db33397f3bf2e7881665b7fb829cfbc4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-07-05 23:28:34.994278 | orchestrator | ok: [testbed-node-4] => (item={'id': 'f2e80722b64a09c6cac6e740023e3779025b64dff642215351e5125ef3fce371', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-07-05 
23:28:34.994290 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8173299ff301b2e1f1b7124ddfc65cf18fa3ceaaac234abc31fe4e6dae618657', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-07-05 23:28:34.994301 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e022f2c40c24bc880416d902e5379a0ed118439ce9b67603f5765545fbe0149f', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-05 23:28:34.994312 | orchestrator | skipping: [testbed-node-4] => (item={'id': '258df65a64954bbbc13cb8887be908fb0afe219f5594ce0a707be5660507443e', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-05 23:28:34.994341 | orchestrator | skipping: [testbed-node-4] => (item={'id': '41dd9f8ca8dabfa979b1cceaca81d3dbe1fd7641f5656bb517725c3f8de505c4', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-05 23:28:34.994353 | orchestrator | skipping: [testbed-node-4] => (item={'id': '88b0b8a1dad665dc5fd63e142431dcb5a71a4fa500a4632d24ba5f69e411ceaa', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-05 23:28:34.994364 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e4ccec1d81e205da0bc6b0de86cd252e1db6cf5f42ab49b3469fdcd318253caa', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-05 23:28:34.994375 | orchestrator | skipping: [testbed-node-5] => (item={'id': '636c260d41c721d2a1dae5dddcdf68f542309785514b16b7dbf0fa73336a25de', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 
'status': 'Up 8 minutes (healthy)'})  2025-07-05 23:28:34.994386 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fba41b25b8ea56e8c5aa8ebf10d583293484a760dc4e1730e67c642dff7a421d', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-05 23:28:34.994397 | orchestrator | skipping: [testbed-node-5] => (item={'id': '094ad8407c7505968cc5e63a97aa16411ba669fb9cce7c415854c72483fc5520', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-05 23:28:34.994409 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9af5e7bf4770211b0837b65e65399acbd79023fab96f908aa73abb2a219eadbd', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-05 23:28:34.994426 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e2b69f10a688cda4b0dcd8687e86d2aef9fe75a8e74a740f82525fd67734aa15', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-05 23:28:34.994437 | orchestrator | skipping: [testbed-node-5] => (item={'id': '25c9227776957b423b40678ff378b5fba826a1b5b64598a9f0999190b16e6bf9', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-05 23:28:34.994448 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7f2275f2b693cb3d913254dbae6822b5222ccf9adae478a4030233628417ff46', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-05 23:28:34.994467 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'f19456e553162d7d729fa54dd6fb20424ddee356696aa70b0b55be487dc33f05', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-05 23:28:34.994478 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dc96601a96612bbb022c1bf49bd747200fd620cb53955e3e9f0e427def09c15d', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-05 23:28:34.994489 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4f58a3510a698d73444b1762303a3dd8d3902bc3acae46af3c72a25902cf7a60', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-07-05 23:28:34.994500 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b4ce07c72f3b7e922be9221d1f7c80f3242b623f4cab7d5beb82153aa3982a02', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2025-07-05 23:28:34.994511 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8a0d96da9f2d395e5dbcabf77afdfb3a3b5c5e4ef0a212155c0c9e06a8070944', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-05 23:28:34.994575 | orchestrator | ok: [testbed-node-5] => (item={'id': 'ee76d8fdbabcae4862241cb7b569ac50a115f4d602f4c0748d2199052482f6f2', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-07-05 23:28:42.786881 | orchestrator | ok: [testbed-node-5] => (item={'id': '84782127881a3ab99af5f126b4dccabf37cbb94b4b15b7fbf29ea781be084617', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-07-05 
23:28:42.786975 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd44e82579272fe9c28f6f44d2639f3aadbeba97f3aa2c9c1f174924c278c4bc5', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-07-05 23:28:42.786986 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b1d4d3588968589da440f48dee071e6d62b27c0260cc590ef05101cfd1fc230e', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-05 23:28:42.786995 | orchestrator | skipping: [testbed-node-5] => (item={'id': '654859fb5f4b0cda8c14055c6701315e812a4181faa28d571f55da5c39ef98c7', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-05 23:28:42.787003 | orchestrator | skipping: [testbed-node-5] => (item={'id': '440dea7ae4a11bc7c03a9a4696d691596c980090a581328ecb4ed948a22fa0f0', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-05 23:28:42.787010 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bc82391dd0cb559998696e479f3a518d9b383448246c054cf74eb723637e111d', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-05 23:28:42.787018 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b6ea62b81c962444e302cbc3db16417e248f20e19f335e8a040bf64972ce838b', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-05 23:28:42.787025 | orchestrator | 2025-07-05 23:28:42.787045 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-07-05 23:28:42.787054 | orchestrator | Saturday 05 July 2025 23:28:34 +0000 
(0:00:00.479) 0:00:04.706 ********* 2025-07-05 23:28:42.787061 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:28:42.787085 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:28:42.787093 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:28:42.787099 | orchestrator | 2025-07-05 23:28:42.787106 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-07-05 23:28:42.787113 | orchestrator | Saturday 05 July 2025 23:28:35 +0000 (0:00:00.281) 0:00:04.988 ********* 2025-07-05 23:28:42.787120 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:28:42.787128 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:28:42.787135 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:28:42.787141 | orchestrator | 2025-07-05 23:28:42.787148 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-07-05 23:28:42.787155 | orchestrator | Saturday 05 July 2025 23:28:35 +0000 (0:00:00.280) 0:00:05.268 ********* 2025-07-05 23:28:42.787162 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:28:42.787168 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:28:42.787175 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:28:42.787182 | orchestrator | 2025-07-05 23:28:42.787189 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-05 23:28:42.787196 | orchestrator | Saturday 05 July 2025 23:28:35 +0000 (0:00:00.442) 0:00:05.711 ********* 2025-07-05 23:28:42.787202 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:28:42.787209 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:28:42.787216 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:28:42.787223 | orchestrator | 2025-07-05 23:28:42.787229 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-07-05 23:28:42.787236 | orchestrator | Saturday 05 July 2025 23:28:36 +0000 (0:00:00.281) 0:00:05.993 ********* 2025-07-05 
23:28:42.787243 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-07-05 23:28:42.787251 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-07-05 23:28:42.787258 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:28:42.787265 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-07-05 23:28:42.787272 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-07-05 23:28:42.787278 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:28:42.787285 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-07-05 23:28:42.787292 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-07-05 23:28:42.787299 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:28:42.787306 | orchestrator | 2025-07-05 23:28:42.787312 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-07-05 23:28:42.787319 | orchestrator | Saturday 05 July 2025 23:28:36 +0000 (0:00:00.306) 0:00:06.299 ********* 2025-07-05 23:28:42.787326 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:28:42.787333 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:28:42.787339 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:28:42.787346 | orchestrator | 2025-07-05 23:28:42.787364 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-07-05 23:28:42.787372 | orchestrator | Saturday 05 July 2025 23:28:36 +0000 (0:00:00.294) 0:00:06.594 ********* 2025-07-05 23:28:42.787378 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:28:42.787385 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:28:42.787392 | orchestrator | 
skipping: [testbed-node-5] 2025-07-05 23:28:42.787399 | orchestrator | 2025-07-05 23:28:42.787405 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-07-05 23:28:42.787412 | orchestrator | Saturday 05 July 2025 23:28:37 +0000 (0:00:00.439) 0:00:07.034 ********* 2025-07-05 23:28:42.787419 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:28:42.787426 | orchestrator | skipping: [testbed-node-4] 2025-07-05 23:28:42.787433 | orchestrator | skipping: [testbed-node-5] 2025-07-05 23:28:42.787440 | orchestrator | 2025-07-05 23:28:42.787454 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-07-05 23:28:42.787462 | orchestrator | Saturday 05 July 2025 23:28:37 +0000 (0:00:00.287) 0:00:07.321 ********* 2025-07-05 23:28:42.787470 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:28:42.787477 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:28:42.787485 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:28:42.787492 | orchestrator | 2025-07-05 23:28:42.787500 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-05 23:28:42.787508 | orchestrator | Saturday 05 July 2025 23:28:37 +0000 (0:00:00.311) 0:00:07.633 ********* 2025-07-05 23:28:42.787539 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:28:42.787547 | orchestrator | 2025-07-05 23:28:42.787555 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-05 23:28:42.787563 | orchestrator | Saturday 05 July 2025 23:28:38 +0000 (0:00:00.283) 0:00:07.916 ********* 2025-07-05 23:28:42.787571 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:28:42.787579 | orchestrator | 2025-07-05 23:28:42.787587 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-05 23:28:42.787595 | orchestrator | Saturday 05 July 2025 23:28:38 +0000 (0:00:00.235) 0:00:08.152 
********* 2025-07-05 23:28:42.787602 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:28:42.787610 | orchestrator | 2025-07-05 23:28:42.787618 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-05 23:28:42.787626 | orchestrator | Saturday 05 July 2025 23:28:38 +0000 (0:00:00.234) 0:00:08.387 ********* 2025-07-05 23:28:42.787634 | orchestrator | 2025-07-05 23:28:42.787641 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-05 23:28:42.787649 | orchestrator | Saturday 05 July 2025 23:28:38 +0000 (0:00:00.063) 0:00:08.450 ********* 2025-07-05 23:28:42.787657 | orchestrator | 2025-07-05 23:28:42.787665 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-05 23:28:42.787673 | orchestrator | Saturday 05 July 2025 23:28:38 +0000 (0:00:00.060) 0:00:08.510 ********* 2025-07-05 23:28:42.787684 | orchestrator | 2025-07-05 23:28:42.787694 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-05 23:28:42.787702 | orchestrator | Saturday 05 July 2025 23:28:39 +0000 (0:00:00.219) 0:00:08.730 ********* 2025-07-05 23:28:42.787709 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:28:42.787717 | orchestrator | 2025-07-05 23:28:42.787725 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-07-05 23:28:42.787734 | orchestrator | Saturday 05 July 2025 23:28:39 +0000 (0:00:00.248) 0:00:08.979 ********* 2025-07-05 23:28:42.787742 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:28:42.787750 | orchestrator | 2025-07-05 23:28:42.787757 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-05 23:28:42.787765 | orchestrator | Saturday 05 July 2025 23:28:39 +0000 (0:00:00.241) 0:00:09.221 ********* 2025-07-05 23:28:42.787773 | orchestrator | ok: 
[testbed-node-3] 2025-07-05 23:28:42.787781 | orchestrator | ok: [testbed-node-4] 2025-07-05 23:28:42.787789 | orchestrator | ok: [testbed-node-5] 2025-07-05 23:28:42.787797 | orchestrator | 2025-07-05 23:28:42.787804 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-07-05 23:28:42.787810 | orchestrator | Saturday 05 July 2025 23:28:39 +0000 (0:00:00.304) 0:00:09.525 ********* 2025-07-05 23:28:42.787817 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:28:42.787823 | orchestrator | 2025-07-05 23:28:42.787830 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-07-05 23:28:42.787836 | orchestrator | Saturday 05 July 2025 23:28:40 +0000 (0:00:00.228) 0:00:09.754 ********* 2025-07-05 23:28:42.787843 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-05 23:28:42.787850 | orchestrator | 2025-07-05 23:28:42.787856 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-07-05 23:28:42.787863 | orchestrator | Saturday 05 July 2025 23:28:41 +0000 (0:00:01.616) 0:00:11.370 ********* 2025-07-05 23:28:42.787874 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:28:42.787881 | orchestrator | 2025-07-05 23:28:42.787888 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-07-05 23:28:42.787894 | orchestrator | Saturday 05 July 2025 23:28:41 +0000 (0:00:00.135) 0:00:11.505 ********* 2025-07-05 23:28:42.787901 | orchestrator | ok: [testbed-node-3] 2025-07-05 23:28:42.787908 | orchestrator | 2025-07-05 23:28:42.787914 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-07-05 23:28:42.787921 | orchestrator | Saturday 05 July 2025 23:28:42 +0000 (0:00:00.289) 0:00:11.794 ********* 2025-07-05 23:28:42.787927 | orchestrator | skipping: [testbed-node-3] 2025-07-05 23:28:42.787934 | orchestrator | 
2025-07-05 23:28:42.787941 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-07-05 23:28:42.787947 | orchestrator | Saturday 05 July 2025 23:28:42 +0000 (0:00:00.133) 0:00:11.928 *********
2025-07-05 23:28:42.787954 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:28:42.787960 | orchestrator |
2025-07-05 23:28:42.787967 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-05 23:28:42.787974 | orchestrator | Saturday 05 July 2025 23:28:42 +0000 (0:00:00.108) 0:00:12.037 *********
2025-07-05 23:28:42.787980 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:28:42.787987 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:28:42.787994 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:28:42.788000 | orchestrator |
2025-07-05 23:28:42.788007 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-07-05 23:28:42.788019 | orchestrator | Saturday 05 July 2025 23:28:42 +0000 (0:00:00.466) 0:00:12.503 *********
2025-07-05 23:28:54.913062 | orchestrator | changed: [testbed-node-3]
2025-07-05 23:28:54.913177 | orchestrator | changed: [testbed-node-4]
2025-07-05 23:28:54.913241 | orchestrator | changed: [testbed-node-5]
2025-07-05 23:28:54.913256 | orchestrator |
2025-07-05 23:28:54.913269 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-07-05 23:28:54.913282 | orchestrator | Saturday 05 July 2025 23:28:45 +0000 (0:00:02.425) 0:00:14.928 *********
2025-07-05 23:28:54.913293 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:28:54.913305 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:28:54.913316 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:28:54.913327 | orchestrator |
2025-07-05 23:28:54.913338 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-07-05 23:28:54.913350 | orchestrator | Saturday 05 July 2025 23:28:45 +0000 (0:00:00.290) 0:00:15.219 *********
2025-07-05 23:28:54.913361 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:28:54.913372 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:28:54.913383 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:28:54.913394 | orchestrator |
2025-07-05 23:28:54.913405 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-07-05 23:28:54.913416 | orchestrator | Saturday 05 July 2025 23:28:45 +0000 (0:00:00.477) 0:00:15.697 *********
2025-07-05 23:28:54.913428 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:28:54.913439 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:28:54.913450 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:28:54.913460 | orchestrator |
2025-07-05 23:28:54.913471 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-07-05 23:28:54.913484 | orchestrator | Saturday 05 July 2025 23:28:46 +0000 (0:00:00.492) 0:00:16.190 *********
2025-07-05 23:28:54.913495 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:28:54.913536 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:28:54.913550 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:28:54.913561 | orchestrator |
2025-07-05 23:28:54.913572 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-07-05 23:28:54.913583 | orchestrator | Saturday 05 July 2025 23:28:46 +0000 (0:00:00.306) 0:00:16.496 *********
2025-07-05 23:28:54.913595 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:28:54.913608 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:28:54.913622 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:28:54.913635 | orchestrator |
2025-07-05 23:28:54.913670 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-07-05 23:28:54.913685 | orchestrator | Saturday 05 July 2025 23:28:47 +0000 (0:00:00.279) 0:00:16.775 *********
2025-07-05 23:28:54.913697 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:28:54.913710 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:28:54.913723 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:28:54.913735 | orchestrator |
2025-07-05 23:28:54.913748 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-05 23:28:54.913767 | orchestrator | Saturday 05 July 2025 23:28:47 +0000 (0:00:00.260) 0:00:17.036 *********
2025-07-05 23:28:54.913780 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:28:54.913792 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:28:54.913805 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:28:54.913817 | orchestrator |
2025-07-05 23:28:54.913830 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-07-05 23:28:54.913843 | orchestrator | Saturday 05 July 2025 23:28:48 +0000 (0:00:00.708) 0:00:17.744 *********
2025-07-05 23:28:54.913856 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:28:54.913869 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:28:54.913881 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:28:54.913893 | orchestrator |
2025-07-05 23:28:54.913906 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-07-05 23:28:54.913919 | orchestrator | Saturday 05 July 2025 23:28:48 +0000 (0:00:00.464) 0:00:18.209 *********
2025-07-05 23:28:54.913932 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:28:54.913945 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:28:54.913958 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:28:54.913968 | orchestrator |
2025-07-05 23:28:54.913979 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-07-05 23:28:54.913990 | orchestrator | Saturday 05 July 2025 23:28:48 +0000 (0:00:00.348) 0:00:18.557 *********
2025-07-05 23:28:54.914001 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:28:54.914012 | orchestrator | skipping: [testbed-node-4]
2025-07-05 23:28:54.914084 | orchestrator | skipping: [testbed-node-5]
2025-07-05 23:28:54.914096 | orchestrator |
2025-07-05 23:28:54.914107 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-07-05 23:28:54.914118 | orchestrator | Saturday 05 July 2025 23:28:49 +0000 (0:00:00.297) 0:00:18.854 *********
2025-07-05 23:28:54.914129 | orchestrator | ok: [testbed-node-3]
2025-07-05 23:28:54.914140 | orchestrator | ok: [testbed-node-4]
2025-07-05 23:28:54.914151 | orchestrator | ok: [testbed-node-5]
2025-07-05 23:28:54.914161 | orchestrator |
2025-07-05 23:28:54.914180 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-05 23:28:54.914195 | orchestrator | Saturday 05 July 2025 23:28:49 +0000 (0:00:00.477) 0:00:19.332 *********
2025-07-05 23:28:54.914207 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-05 23:28:54.914218 | orchestrator |
2025-07-05 23:28:54.914229 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-05 23:28:54.914240 | orchestrator | Saturday 05 July 2025 23:28:49 +0000 (0:00:00.266) 0:00:19.598 *********
2025-07-05 23:28:54.914251 | orchestrator | skipping: [testbed-node-3]
2025-07-05 23:28:54.914261 | orchestrator |
2025-07-05 23:28:54.914272 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-05 23:28:54.914283 | orchestrator | Saturday 05 July 2025 23:28:50 +0000 (0:00:00.246) 0:00:19.845 *********
2025-07-05 23:28:54.914294 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-05 23:28:54.914305 | orchestrator |
2025-07-05 23:28:54.914316 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-05 23:28:54.914326 | orchestrator | Saturday 05 July 2025 23:28:51 +0000 (0:00:01.560) 0:00:21.405 *********
2025-07-05 23:28:54.914337 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-05 23:28:54.914347 | orchestrator |
2025-07-05 23:28:54.914358 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-05 23:28:54.914378 | orchestrator | Saturday 05 July 2025 23:28:51 +0000 (0:00:00.243) 0:00:21.649 *********
2025-07-05 23:28:54.914409 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-05 23:28:54.914421 | orchestrator |
2025-07-05 23:28:54.914432 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-05 23:28:54.914443 | orchestrator | Saturday 05 July 2025 23:28:52 +0000 (0:00:00.244) 0:00:21.893 *********
2025-07-05 23:28:54.914454 | orchestrator |
2025-07-05 23:28:54.914464 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-05 23:28:54.914475 | orchestrator | Saturday 05 July 2025 23:28:52 +0000 (0:00:00.066) 0:00:21.959 *********
2025-07-05 23:28:54.914486 | orchestrator |
2025-07-05 23:28:54.914497 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-05 23:28:54.914530 | orchestrator | Saturday 05 July 2025 23:28:52 +0000 (0:00:00.076) 0:00:22.036 *********
2025-07-05 23:28:54.914543 | orchestrator |
2025-07-05 23:28:54.914554 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-05 23:28:54.914564 | orchestrator | Saturday 05 July 2025 23:28:52 +0000 (0:00:00.067) 0:00:22.103 *********
2025-07-05 23:28:54.914575 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-05 23:28:54.914585 | orchestrator |
2025-07-05 23:28:54.914596 | orchestrator | TASK [Print report file information] *******************************************
2025-07-05 23:28:54.914607 | orchestrator | Saturday 05 July 2025 23:28:53 +0000 (0:00:01.473) 0:00:23.576 *********
2025-07-05 23:28:54.914617 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-07-05 23:28:54.914628 | orchestrator |  "msg": [
2025-07-05 23:28:54.914640 | orchestrator |  "Validator run completed.",
2025-07-05 23:28:54.914651 | orchestrator |  "You can find the report file here:",
2025-07-05 23:28:54.914663 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-07-05T23:28:31+00:00-report.json",
2025-07-05 23:28:54.914675 | orchestrator |  "on the following host:",
2025-07-05 23:28:54.914687 | orchestrator |  "testbed-manager"
2025-07-05 23:28:54.914698 | orchestrator |  ]
2025-07-05 23:28:54.914709 | orchestrator | }
2025-07-05 23:28:54.914720 | orchestrator |
2025-07-05 23:28:54.914731 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 23:28:54.914743 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-07-05 23:28:54.914755 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-05 23:28:54.914772 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-05 23:28:54.914784 | orchestrator |
2025-07-05 23:28:54.914795 | orchestrator |
2025-07-05 23:28:54.914806 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 23:28:54.914816 | orchestrator | Saturday 05 July 2025 23:28:54 +0000 (0:00:00.775) 0:00:24.351 *********
2025-07-05 23:28:54.914827 | orchestrator | ===============================================================================
2025-07-05 23:28:54.914838 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.43s
2025-07-05 23:28:54.914849 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.62s
2025-07-05 23:28:54.914859 | orchestrator | Aggregate test results step one ----------------------------------------- 1.56s
2025-07-05 23:28:54.914870 | orchestrator | Write report file ------------------------------------------------------- 1.47s
2025-07-05 23:28:54.914880 | orchestrator | Create report output directory ------------------------------------------ 0.91s
2025-07-05 23:28:54.914891 | orchestrator | Print report file information ------------------------------------------- 0.78s
2025-07-05 23:28:54.914904 | orchestrator | Prepare test data ------------------------------------------------------- 0.71s
2025-07-05 23:28:54.914929 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s
2025-07-05 23:28:54.914941 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.52s
2025-07-05 23:28:54.914952 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.49s
2025-07-05 23:28:54.914962 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.48s
2025-07-05 23:28:54.914973 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.48s
2025-07-05 23:28:54.914984 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.48s
2025-07-05 23:28:54.914994 | orchestrator | Prepare test data ------------------------------------------------------- 0.47s
2025-07-05 23:28:54.915005 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.46s
2025-07-05 23:28:54.915016 | orchestrator | Set test result to passed if count matches ------------------------------ 0.44s
2025-07-05 23:28:54.915026 | orchestrator | Prepare test data ------------------------------------------------------- 0.44s
2025-07-05 23:28:54.915037 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.44s
2025-07-05 23:28:54.915048 | orchestrator | Calculate sub test expression results ----------------------------------- 0.35s
2025-07-05 23:28:54.915059 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.35s
2025-07-05 23:28:55.212574 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-07-05 23:28:55.217399 | orchestrator | + set -e
2025-07-05 23:28:55.217489 | orchestrator | + source /opt/manager-vars.sh
2025-07-05 23:28:55.217505 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-05 23:28:55.217553 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-05 23:28:55.217564 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-05 23:28:55.217575 | orchestrator | ++ CEPH_VERSION=reef
2025-07-05 23:28:55.217587 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-05 23:28:55.217599 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-05 23:28:55.217616 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-05 23:28:55.217635 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-05 23:28:55.217652 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-05 23:28:55.217670 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-05 23:28:55.217687 | orchestrator | ++ export ARA=false
2025-07-05 23:28:55.217706 | orchestrator | ++ ARA=false
2025-07-05 23:28:55.217721 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-05 23:28:55.217737 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-05 23:28:55.217755 | orchestrator | ++ export TEMPEST=false
2025-07-05 23:28:55.217773 | orchestrator | ++ TEMPEST=false
2025-07-05 23:28:55.217792 | orchestrator | ++ export IS_ZUUL=true
2025-07-05 23:28:55.217812 | orchestrator | ++ IS_ZUUL=true
2025-07-05 23:28:55.217831 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.94
2025-07-05 23:28:55.217844 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.94
2025-07-05 23:28:55.217855 | orchestrator | ++ export EXTERNAL_API=false
2025-07-05 23:28:55.217866 | orchestrator | ++ EXTERNAL_API=false
2025-07-05 23:28:55.217877 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-05 23:28:55.217888 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-05 23:28:55.217899 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-05 23:28:55.217910 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-05 23:28:55.217921 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-05 23:28:55.217931 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-05 23:28:55.217942 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-07-05 23:28:55.217953 | orchestrator | + source /etc/os-release
2025-07-05 23:28:55.217964 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS'
2025-07-05 23:28:55.217975 | orchestrator | ++ NAME=Ubuntu
2025-07-05 23:28:55.217987 | orchestrator | ++ VERSION_ID=24.04
2025-07-05 23:28:55.218001 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)'
2025-07-05 23:28:55.218013 | orchestrator | ++ VERSION_CODENAME=noble
2025-07-05 23:28:55.218081 | orchestrator | ++ ID=ubuntu
2025-07-05 23:28:55.218093 | orchestrator | ++ ID_LIKE=debian
2025-07-05 23:28:55.218105 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-07-05 23:28:55.218118 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-07-05 23:28:55.218130 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-07-05 23:28:55.218143 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-07-05 23:28:55.218157 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-07-05 23:28:55.218170 | orchestrator | ++ LOGO=ubuntu-logo
2025-07-05 23:28:55.218209 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-07-05 23:28:55.218224 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-07-05 23:28:55.218238 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-07-05 23:28:55.241805 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-07-05 23:29:13.658466 | orchestrator |
2025-07-05 23:29:13.658656 | orchestrator | # Status of Elasticsearch
2025-07-05 23:29:13.658677 | orchestrator |
2025-07-05 23:29:13.658690 | orchestrator | + pushd /opt/configuration/contrib
2025-07-05 23:29:13.658703 | orchestrator | + echo
2025-07-05 23:29:13.658715 | orchestrator | + echo '# Status of Elasticsearch'
2025-07-05 23:29:13.658782 | orchestrator | + echo
2025-07-05 23:29:13.658797 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-07-05 23:29:13.819333 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2025-07-05 23:29:13.819433 | orchestrator |
2025-07-05 23:29:13.819449 | orchestrator | # Status of MariaDB
2025-07-05 23:29:13.819462 | orchestrator |
2025-07-05 23:29:13.819473 | orchestrator | + echo
2025-07-05 23:29:13.819485 | orchestrator | + echo '# Status of MariaDB'
2025-07-05 23:29:13.819497 | orchestrator | + echo
2025-07-05 23:29:13.819508 | orchestrator | + MARIADB_USER=root_shard_0
2025-07-05 23:29:13.819568 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2025-07-05 23:29:13.884034 | orchestrator | Reading package lists...
2025-07-05 23:29:14.118268 | orchestrator | Building dependency tree...
2025-07-05 23:29:14.118623 | orchestrator | Reading state information...
2025-07-05 23:29:14.409097 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2025-07-05 23:29:14.409200 | orchestrator | bc set to manually installed.
2025-07-05 23:29:14.409215 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2025-07-05 23:29:15.078995 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-07-05 23:29:15.079881 | orchestrator |
2025-07-05 23:29:15.079915 | orchestrator | # Status of Prometheus
2025-07-05 23:29:15.079929 | orchestrator |
2025-07-05 23:29:15.079941 | orchestrator | + echo
2025-07-05 23:29:15.079953 | orchestrator | + echo '# Status of Prometheus'
2025-07-05 23:29:15.079965 | orchestrator | + echo
2025-07-05 23:29:15.079976 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-07-05 23:29:15.130350 | orchestrator | Unauthorized
2025-07-05 23:29:15.134199 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-07-05 23:29:15.195286 | orchestrator | Unauthorized
2025-07-05 23:29:15.198582 | orchestrator |
2025-07-05 23:29:15.198624 | orchestrator | # Status of RabbitMQ
2025-07-05 23:29:15.198638 | orchestrator |
2025-07-05 23:29:15.198650 | orchestrator | + echo
2025-07-05 23:29:15.198661 | orchestrator | + echo '# Status of RabbitMQ'
2025-07-05 23:29:15.198672 | orchestrator | + echo
2025-07-05 23:29:15.198684 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-07-05 23:29:15.674257 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-07-05 23:29:15.683040 | orchestrator |
2025-07-05 23:29:15.683093 | orchestrator | # Status of Redis
2025-07-05 23:29:15.683107 | orchestrator |
2025-07-05 23:29:15.683118 | orchestrator | + echo
2025-07-05 23:29:15.683130 | orchestrator | + echo '# Status of Redis'
2025-07-05 23:29:15.683142 | orchestrator | + echo
2025-07-05 23:29:15.683155 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-07-05 23:29:15.687251 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001318s;;;0.000000;10.000000
2025-07-05 23:29:15.687768 | orchestrator | + popd
2025-07-05 23:29:15.687791 | orchestrator |
2025-07-05 23:29:15.687803 | orchestrator | + echo
2025-07-05 23:29:15.687815 | orchestrator | # Create backup of MariaDB database
2025-07-05 23:29:15.687827 | orchestrator |
2025-07-05 23:29:15.687838 | orchestrator | + echo '# Create backup of MariaDB database'
2025-07-05 23:29:15.687850 | orchestrator | + echo
2025-07-05 23:29:15.687898 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-07-05 23:29:17.530933 | orchestrator | 2025-07-05 23:29:17 | INFO  | Task 549042c6-e3aa-43cd-af12-d003d6f7982d (mariadb_backup) was prepared for execution.
2025-07-05 23:29:17.531034 | orchestrator | 2025-07-05 23:29:17 | INFO  | It takes a moment until task 549042c6-e3aa-43cd-af12-d003d6f7982d (mariadb_backup) has been started and output is visible here.
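The Galera check invoked above (`check_galera_cluster ... -c 1`) reduces to a threshold comparison on `wsrep_cluster_size`. A minimal sketch of that pass/fail logic in shell, assuming the cluster size has already been queried; the real plugin obtains it from MariaDB (via `SHOW STATUS LIKE 'wsrep_cluster_size'`), and the function name here is hypothetical:

```shell
#!/usr/bin/env bash
# Hypothetical reduction of the check_galera_cluster pass/fail core.
# The real plugin queries wsrep_cluster_size over a MySQL connection;
# here the value is passed in so the logic can run standalone.
check_cluster_size() {
  local size="$1" critical="$2"
  if [ "$size" -le "$critical" ]; then
    echo "CRITICAL: number of NODES = ${size} (wsrep_cluster_size)"
    return 2
  fi
  echo "OK: number of NODES = ${size} (wsrep_cluster_size)"
  return 0
}

# Mirrors the run in the log: three cluster nodes, critical threshold 1.
check_cluster_size 3 1
# → OK: number of NODES = 3 (wsrep_cluster_size)
```

Exit code 2 matches the Nagios plugin convention for CRITICAL, which is what lets `set -e` in the check script abort the job on a shrunken cluster.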
2025-07-05 23:32:18.326296 | orchestrator |
2025-07-05 23:32:18.326408 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-05 23:32:18.326424 | orchestrator |
2025-07-05 23:32:18.326434 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-05 23:32:18.326444 | orchestrator | Saturday 05 July 2025 23:29:21 +0000 (0:00:00.173) 0:00:00.173 *********
2025-07-05 23:32:18.326453 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:32:18.326463 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:32:18.326472 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:32:18.326481 | orchestrator |
2025-07-05 23:32:18.326490 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-05 23:32:18.326499 | orchestrator | Saturday 05 July 2025 23:29:21 +0000 (0:00:00.314) 0:00:00.488 *********
2025-07-05 23:32:18.326508 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-07-05 23:32:18.326517 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-07-05 23:32:18.326526 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-07-05 23:32:18.326535 | orchestrator |
2025-07-05 23:32:18.326544 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-07-05 23:32:18.326616 | orchestrator |
2025-07-05 23:32:18.326628 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-07-05 23:32:18.326637 | orchestrator | Saturday 05 July 2025 23:29:22 +0000 (0:00:00.547) 0:00:01.035 *********
2025-07-05 23:32:18.326646 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-05 23:32:18.326655 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-07-05 23:32:18.326664 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-07-05 23:32:18.326673 | orchestrator |
2025-07-05 23:32:18.326681 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-07-05 23:32:18.326690 | orchestrator | Saturday 05 July 2025 23:29:22 +0000 (0:00:00.389) 0:00:01.425 *********
2025-07-05 23:32:18.326699 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-05 23:32:18.326709 | orchestrator |
2025-07-05 23:32:18.326718 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-07-05 23:32:18.326727 | orchestrator | Saturday 05 July 2025 23:29:23 +0000 (0:00:03.090) 0:00:01.965 *********
2025-07-05 23:32:18.326735 | orchestrator | ok: [testbed-node-0]
2025-07-05 23:32:18.326744 | orchestrator | ok: [testbed-node-1]
2025-07-05 23:32:18.326753 | orchestrator | ok: [testbed-node-2]
2025-07-05 23:32:18.326761 | orchestrator |
2025-07-05 23:32:18.326770 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2025-07-05 23:32:18.326779 | orchestrator | Saturday 05 July 2025 23:29:26 +0000 (0:00:03.090) 0:00:05.055 *********
2025-07-05 23:32:18.326787 | orchestrator | skipping: [testbed-node-1]
2025-07-05 23:32:18.326797 | orchestrator | skipping: [testbed-node-2]
2025-07-05 23:32:18.326805 | orchestrator |
2025-07-05 23:32:18.326814 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] ***
2025-07-05 23:32:18.326823 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-07-05 23:32:18.326831 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-07-05 23:32:18.326840 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-07-05 23:32:18.326850 | orchestrator | mariadb_bootstrap_restart
2025-07-05 23:32:18.326861 | orchestrator | changed: [testbed-node-0]
2025-07-05 23:32:18.326871 | orchestrator |
2025-07-05 23:32:18.326903 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-05 23:32:18.326913 | orchestrator | skipping: no hosts matched 2025-07-05 23:32:18.326923 | orchestrator | 2025-07-05 23:32:18.326933 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-05 23:32:18.326943 | orchestrator | skipping: no hosts matched 2025-07-05 23:32:18.326953 | orchestrator | 2025-07-05 23:32:18.326963 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-05 23:32:18.326972 | orchestrator | skipping: no hosts matched 2025-07-05 23:32:18.326982 | orchestrator | 2025-07-05 23:32:18.326992 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-05 23:32:18.327002 | orchestrator | 2025-07-05 23:32:18.327011 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-05 23:32:18.327021 | orchestrator | Saturday 05 July 2025 23:32:17 +0000 (0:02:51.167) 0:02:56.223 ********* 2025-07-05 23:32:18.327031 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:32:18.327041 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:32:18.327051 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:32:18.327062 | orchestrator | 2025-07-05 23:32:18.327072 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-05 23:32:18.327082 | orchestrator | Saturday 05 July 2025 23:32:17 +0000 (0:00:00.306) 0:02:56.529 ********* 2025-07-05 23:32:18.327092 | orchestrator | skipping: [testbed-node-0] 2025-07-05 23:32:18.327102 | orchestrator | skipping: [testbed-node-1] 2025-07-05 23:32:18.327112 | orchestrator | skipping: [testbed-node-2] 2025-07-05 23:32:18.327122 | orchestrator | 2025-07-05 23:32:18.327132 | orchestrator | PLAY RECAP 
********************************************************************* 2025-07-05 23:32:18.327158 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-05 23:32:18.327170 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-05 23:32:18.327180 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-05 23:32:18.327190 | orchestrator | 2025-07-05 23:32:18.327200 | orchestrator | 2025-07-05 23:32:18.327209 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-05 23:32:18.327218 | orchestrator | Saturday 05 July 2025 23:32:18 +0000 (0:00:00.372) 0:02:56.901 ********* 2025-07-05 23:32:18.327227 | orchestrator | =============================================================================== 2025-07-05 23:32:18.327251 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 171.17s 2025-07-05 23:32:18.327260 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.09s 2025-07-05 23:32:18.327269 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2025-07-05 23:32:18.327277 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s 2025-07-05 23:32:18.327286 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2025-07-05 23:32:18.327295 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.37s 2025-07-05 23:32:18.327303 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-07-05 23:32:18.327312 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2025-07-05 23:32:18.565133 | orchestrator | + sh -c 
/opt/configuration/scripts/check/300-openstack.sh 2025-07-05 23:32:18.572965 | orchestrator | + set -e 2025-07-05 23:32:18.573044 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-05 23:32:18.573066 | orchestrator | ++ export INTERACTIVE=false 2025-07-05 23:32:18.573085 | orchestrator | ++ INTERACTIVE=false 2025-07-05 23:32:18.573104 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-05 23:32:18.573122 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-05 23:32:18.573141 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-05 23:32:18.573159 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-05 23:32:18.575888 | orchestrator | 2025-07-05 23:32:18.575929 | orchestrator | # OpenStack endpoints 2025-07-05 23:32:18.575942 | orchestrator | 2025-07-05 23:32:18.575953 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-05 23:32:18.575964 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-05 23:32:18.575976 | orchestrator | + export OS_CLOUD=admin 2025-07-05 23:32:18.575987 | orchestrator | + OS_CLOUD=admin 2025-07-05 23:32:18.575998 | orchestrator | + echo 2025-07-05 23:32:18.576009 | orchestrator | + echo '# OpenStack endpoints' 2025-07-05 23:32:18.576019 | orchestrator | + echo 2025-07-05 23:32:18.576030 | orchestrator | + openstack endpoint list 2025-07-05 23:32:22.147710 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-05 23:32:22.147798 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-07-05 23:32:22.147807 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-05 23:32:22.147814 | orchestrator | | 
0a8df5e10b0742d8a935e446d5416eea | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-07-05 23:32:22.147821 | orchestrator | | 0bdd43902c3a4c56877c2276e4e15e4a | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-07-05 23:32:22.147841 | orchestrator | | 0d751a972ea048ce8ecdff7209ed295c | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-07-05 23:32:22.147849 | orchestrator | | 3492222d15e54dfabe8cae7c45056a4d | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-07-05 23:32:22.147855 | orchestrator | | 3edf10a6b9134c6e91e39a1e6f0784a2 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-07-05 23:32:22.147862 | orchestrator | | 4a44df64c83947dd94dba44e6803c929 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-07-05 23:32:22.147869 | orchestrator | | 654f96891cf549ac9c0e111034214f11 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-07-05 23:32:22.147876 | orchestrator | | 8007119e8f00427ab27d312c97355493 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-07-05 23:32:22.147882 | orchestrator | | 9c66d889c7154c7e9df573d601bd3424 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-07-05 23:32:22.147889 | orchestrator | | 9c80596e2b4840d2936e4fc9c57d66b0 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-07-05 23:32:22.147896 | orchestrator | | adac4e90c353422cac7b6967f08886b9 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-07-05 23:32:22.147902 | orchestrator | | b466f0c01e7c4776abad62d9735f6ac0 | RegionOne | 
octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-07-05 23:32:22.147909 | orchestrator | | c3091a345dd74a82bc48d8a29d61ad90 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-07-05 23:32:22.147916 | orchestrator | | c8c4465941434b42b6db54461b40f876 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-07-05 23:32:22.147941 | orchestrator | | ca34cd6eeeab498e83d32e9574c0397a | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-07-05 23:32:22.147948 | orchestrator | | cbeafdcdd63648cc952990fbcc2c2f85 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-07-05 23:32:22.147954 | orchestrator | | d65de64939584cd7b2c38e2e4ab82dae | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-07-05 23:32:22.147961 | orchestrator | | d85ac4ddca934a46b0ab98990e8720d8 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-07-05 23:32:22.147968 | orchestrator | | ea75107e31194a89853edf549171c435 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-07-05 23:32:22.147974 | orchestrator | | f0aa598d11124108a5a047e264bdebeb | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-07-05 23:32:22.147994 | orchestrator | | f65afa6d49d04a75a85dd438e9d06db0 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-07-05 23:32:22.148001 | orchestrator | | fd2d8a62995344d7b258726660626a49 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-07-05 23:32:22.148008 | orchestrator | 
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-07-05 23:32:22.377116 | orchestrator |
2025-07-05 23:32:22.377217 | orchestrator | # Cinder
2025-07-05 23:32:22.377231 | orchestrator |
2025-07-05 23:32:22.377243 | orchestrator | + echo
2025-07-05 23:32:22.377255 | orchestrator | + echo '# Cinder'
2025-07-05 23:32:22.377266 | orchestrator | + echo
2025-07-05 23:32:22.377278 | orchestrator | + openstack volume service list
2025-07-05 23:32:25.532335 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-07-05 23:32:25.532447 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2025-07-05 23:32:25.532462 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-07-05 23:32:25.532473 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-05T23:32:15.000000 |
2025-07-05 23:32:25.532503 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-05T23:32:18.000000 |
2025-07-05 23:32:25.532515 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-05T23:32:18.000000 |
2025-07-05 23:32:25.532527 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-07-05T23:32:22.000000 |
2025-07-05 23:32:25.532539 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-07-05T23:32:23.000000 |
2025-07-05 23:32:25.532550 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-07-05T23:32:24.000000 |
2025-07-05 23:32:25.532629 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-07-05T23:32:15.000000 |
2025-07-05 23:32:25.532641 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-07-05T23:32:15.000000 |
2025-07-05 23:32:25.532652 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-07-05T23:32:15.000000 |
2025-07-05 23:32:25.532664 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-07-05 23:32:25.775983 | orchestrator |
2025-07-05 23:32:25.776101 | orchestrator | # Neutron
2025-07-05 23:32:25.776117 | orchestrator |
2025-07-05 23:32:25.776130 | orchestrator | + echo
2025-07-05 23:32:25.776142 | orchestrator | + echo '# Neutron'
2025-07-05 23:32:25.776155 | orchestrator | + echo
2025-07-05 23:32:25.776194 | orchestrator | + openstack network agent list
2025-07-05 23:32:28.581008 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-07-05 23:32:28.581116 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2025-07-05 23:32:28.581130 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-07-05 23:32:28.581142 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2025-07-05 23:32:28.581153 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2025-07-05 23:32:28.581164 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2025-07-05 23:32:28.581175 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2025-07-05 23:32:28.581186 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2025-07-05 23:32:28.581197 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2025-07-05 23:32:28.581208 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2025-07-05 23:32:28.581219 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2025-07-05 23:32:28.581229 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2025-07-05 23:32:28.581241 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-07-05 23:32:28.822182 | orchestrator | + openstack network service provider list
2025-07-05 23:32:31.331451 | orchestrator | +---------------+------+---------+
2025-07-05 23:32:31.331609 | orchestrator | | Service Type | Name | Default |
2025-07-05 23:32:31.331629 | orchestrator | +---------------+------+---------+
2025-07-05 23:32:31.331638 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2025-07-05 23:32:31.331645 | orchestrator | +---------------+------+---------+
2025-07-05 23:32:31.567917 | orchestrator |
2025-07-05 23:32:31.568022 | orchestrator | # Nova
2025-07-05 23:32:31.568038 | orchestrator |
2025-07-05 23:32:31.568050 | orchestrator | + echo
2025-07-05 23:32:31.568062 | orchestrator | + echo '# Nova'
2025-07-05 23:32:31.568074 | orchestrator | + echo
2025-07-05 23:32:31.568085 | orchestrator | + openstack compute service list
2025-07-05 23:32:34.861528 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-07-05 23:32:34.861760 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2025-07-05 23:32:34.861782 | orchestrator |
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-07-05 23:32:34.861794 | orchestrator | | f28ae079-1bbe-4f05-836a-78443c4cf89d | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-05T23:32:33.000000 |
2025-07-05 23:32:34.861805 | orchestrator | | 948cb6f7-5b54-49ca-9d12-a6d8ea95be5e | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-05T23:32:28.000000 |
2025-07-05 23:32:34.861818 | orchestrator | | d5c5c313-1cb4-47e3-aa47-ad6ebfb9a4e0 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-05T23:32:28.000000 |
2025-07-05 23:32:34.861849 | orchestrator | | ad40fddf-ce7c-43cf-8f0c-5063ab83c72c | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-07-05T23:32:25.000000 |
2025-07-05 23:32:34.861896 | orchestrator | | bdc6e6e4-b122-4a0a-86a9-c759b01cb38c | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-07-05T23:32:28.000000 |
2025-07-05 23:32:34.861924 | orchestrator | | eec110ef-cfaa-4dc8-b38d-ef2a593d8b98 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-07-05T23:32:30.000000 |
2025-07-05 23:32:34.861942 | orchestrator | | 2fb12ea6-aad8-4919-8cf2-120b864c805e | nova-compute | testbed-node-4 | nova | enabled | up | 2025-07-05T23:32:26.000000 |
2025-07-05 23:32:34.861960 | orchestrator | | bce80b9c-1992-470f-81fb-e8a5833f0c13 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-07-05T23:32:26.000000 |
2025-07-05 23:32:34.861976 | orchestrator | | 707c0ef6-1daa-4437-9b32-f8e7ff614b90 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-07-05T23:32:27.000000 |
2025-07-05 23:32:34.861994 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-07-05 23:32:35.109015 | orchestrator | + openstack hypervisor list
2025-07-05 23:32:39.315666 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-07-05 23:32:39.315805 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2025-07-05 23:32:39.315836 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-07-05 23:32:39.315859 | orchestrator | | 34e6f720-4dca-4d84-b600-7cfc11cfb9c8 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2025-07-05 23:32:39.315871 | orchestrator | | 3f58b975-1fee-49de-99c5-dc00b8c26bbc | testbed-node-5 | QEMU | 192.168.16.15 | up |
2025-07-05 23:32:39.315882 | orchestrator | | 8e7b1bf6-d105-4f00-91a6-b74ceeaea64b | testbed-node-3 | QEMU | 192.168.16.13 | up |
2025-07-05 23:32:39.315894 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-07-05 23:32:39.582096 | orchestrator |
2025-07-05 23:32:39.582211 | orchestrator | # Run OpenStack test play
2025-07-05 23:32:39.582222 | orchestrator |
2025-07-05 23:32:39.582230 | orchestrator | + echo
2025-07-05 23:32:39.582240 | orchestrator | + echo '# Run OpenStack test play'
2025-07-05 23:32:39.582249 | orchestrator | + echo
2025-07-05 23:32:39.582257 | orchestrator | + osism apply --environment openstack test
2025-07-05 23:32:41.481169 | orchestrator | 2025-07-05 23:32:41 | INFO  | Trying to run play test in environment openstack
2025-07-05 23:32:51.597318 | orchestrator | 2025-07-05 23:32:51 | INFO  | Task 7a9847a0-4361-45c9-ba46-afcdb699b02e (test) was prepared for execution.
2025-07-05 23:32:51.597454 | orchestrator | 2025-07-05 23:32:51 | INFO  | It takes a moment until task 7a9847a0-4361-45c9-ba46-afcdb699b02e (test) has been started and output is visible here.
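Every console line in this job carries a `<timestamp> | <node> | ` prefix added by the Zuul executor. When post-processing such logs offline, the prefix can be stripped to recover the bare command output. A minimal sketch (the regex and helper name are assumptions, not part of the job):

```python
import re

# Zuul console lines look like:
#   "2025-07-05 23:32:31.331638 | orchestrator | | L3_ROUTER_NAT | ovn | True |"
# The prefix is a microsecond timestamp, the node name, and " | " separators.
PREFIX = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ \| [\w-]+ \| ")

def strip_prefix(line: str) -> str:
    """Remove the leading timestamp/node prefix, if present; otherwise return the line unchanged."""
    return PREFIX.sub("", line, count=1)

line = "2025-07-05 23:32:31.331638 | orchestrator | | L3_ROUTER_NAT | ovn | True |"
print(strip_prefix(line))  # -> "| L3_ROUTER_NAT | ovn | True |"
```

Lines without the prefix (e.g. already-stripped payload) pass through unchanged, so the helper is safe to apply to a whole log.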
2025-07-05 23:38:44.591583 | orchestrator |
2025-07-05 23:38:44.591769 | orchestrator | PLAY [Create test project] *****************************************************
2025-07-05 23:38:44.591788 | orchestrator |
2025-07-05 23:38:44.591800 | orchestrator | TASK [Create test domain] ******************************************************
2025-07-05 23:38:44.591812 | orchestrator | Saturday 05 July 2025  23:32:55 +0000 (0:00:00.092)       0:00:00.092 *********
2025-07-05 23:38:44.591824 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.591837 | orchestrator |
2025-07-05 23:38:44.591849 | orchestrator | TASK [Create test-admin user] **************************************************
2025-07-05 23:38:44.591861 | orchestrator | Saturday 05 July 2025  23:32:58 +0000 (0:00:03.503)       0:00:03.596 *********
2025-07-05 23:38:44.591872 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.591884 | orchestrator |
2025-07-05 23:38:44.591895 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-07-05 23:38:44.591907 | orchestrator | Saturday 05 July 2025  23:33:03 +0000 (0:00:04.090)       0:00:07.686 *********
2025-07-05 23:38:44.591918 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.591930 | orchestrator |
2025-07-05 23:38:44.591942 | orchestrator | TASK [Create test project] *****************************************************
2025-07-05 23:38:44.591981 | orchestrator | Saturday 05 July 2025  23:33:09 +0000 (0:00:06.001)       0:00:13.688 *********
2025-07-05 23:38:44.591993 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.592004 | orchestrator |
2025-07-05 23:38:44.592015 | orchestrator | TASK [Create test user] ********************************************************
2025-07-05 23:38:44.592027 | orchestrator | Saturday 05 July 2025  23:33:12 +0000 (0:00:03.782)       0:00:17.471 *********
2025-07-05 23:38:44.592038 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.592050 | orchestrator |
2025-07-05 23:38:44.592061 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-07-05 23:38:44.592073 | orchestrator | Saturday 05 July 2025  23:33:17 +0000 (0:00:04.157)       0:00:21.629 *********
2025-07-05 23:38:44.592084 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-07-05 23:38:44.592097 | orchestrator | changed: [localhost] => (item=member)
2025-07-05 23:38:44.592110 | orchestrator | changed: [localhost] => (item=creator)
2025-07-05 23:38:44.592121 | orchestrator |
2025-07-05 23:38:44.592132 | orchestrator | TASK [Create test server group] ************************************************
2025-07-05 23:38:44.592143 | orchestrator | Saturday 05 July 2025  23:33:28 +0000 (0:00:11.843)       0:00:33.472 *********
2025-07-05 23:38:44.592155 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.592166 | orchestrator |
2025-07-05 23:38:44.592178 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-07-05 23:38:44.592189 | orchestrator | Saturday 05 July 2025  23:33:33 +0000 (0:00:04.762)       0:00:38.235 *********
2025-07-05 23:38:44.592200 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.592211 | orchestrator |
2025-07-05 23:38:44.592223 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-07-05 23:38:44.592234 | orchestrator | Saturday 05 July 2025  23:33:38 +0000 (0:00:04.716)       0:00:42.951 *********
2025-07-05 23:38:44.592246 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.592257 | orchestrator |
2025-07-05 23:38:44.592268 | orchestrator | TASK [Create icmp security group] **********************************************
2025-07-05 23:38:44.592280 | orchestrator | Saturday 05 July 2025  23:33:42 +0000 (0:00:04.137)       0:00:47.089 *********
2025-07-05 23:38:44.592291 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.592303 | orchestrator |
2025-07-05 23:38:44.592314 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-07-05 23:38:44.592326 | orchestrator | Saturday 05 July 2025  23:33:46 +0000 (0:00:03.825)       0:00:50.914 *********
2025-07-05 23:38:44.592337 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.592348 | orchestrator |
2025-07-05 23:38:44.592359 | orchestrator | TASK [Create test keypair] *****************************************************
2025-07-05 23:38:44.592371 | orchestrator | Saturday 05 July 2025  23:33:50 +0000 (0:00:04.066)       0:00:54.981 *********
2025-07-05 23:38:44.592382 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.592393 | orchestrator |
2025-07-05 23:38:44.592404 | orchestrator | TASK [Create test network topology] ********************************************
2025-07-05 23:38:44.592416 | orchestrator | Saturday 05 July 2025  23:33:54 +0000 (0:00:03.782)       0:00:58.763 *********
2025-07-05 23:38:44.592427 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.592439 | orchestrator |
2025-07-05 23:38:44.592450 | orchestrator | TASK [Create test instances] ***************************************************
2025-07-05 23:38:44.592462 | orchestrator | Saturday 05 July 2025  23:34:08 +0000 (0:00:14.689)       0:01:13.453 *********
2025-07-05 23:38:44.592473 | orchestrator | changed: [localhost] => (item=test)
2025-07-05 23:38:44.592485 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-05 23:38:44.592496 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-05 23:38:44.592507 | orchestrator |
2025-07-05 23:38:44.592518 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-07-05 23:38:44.592530 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-05 23:38:44.592541 | orchestrator |
2025-07-05 23:38:44.592552 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-07-05 23:38:44.592564 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-05 23:38:44.592583 | orchestrator |
2025-07-05 23:38:44.592663 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-07-05 23:38:44.592676 | orchestrator | Saturday 05 July 2025  23:37:22 +0000 (0:03:13.984)       0:04:27.437 *********
2025-07-05 23:38:44.592686 | orchestrator | changed: [localhost] => (item=test)
2025-07-05 23:38:44.592697 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-05 23:38:44.592708 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-05 23:38:44.592720 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-05 23:38:44.592731 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-05 23:38:44.592742 | orchestrator |
2025-07-05 23:38:44.592753 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-07-05 23:38:44.592768 | orchestrator | Saturday 05 July 2025  23:37:46 +0000 (0:00:23.608)       0:04:51.046 *********
2025-07-05 23:38:44.592780 | orchestrator | changed: [localhost] => (item=test)
2025-07-05 23:38:44.592791 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-05 23:38:44.592803 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-05 23:38:44.592831 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-05 23:38:44.592865 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-05 23:38:44.592877 | orchestrator |
2025-07-05 23:38:44.592888 | orchestrator | TASK [Create test volume] ******************************************************
2025-07-05 23:38:44.592899 | orchestrator | Saturday 05 July 2025  23:38:19 +0000 (0:00:32.671)       0:05:23.718 *********
2025-07-05 23:38:44.592910 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.592921 | orchestrator |
2025-07-05 23:38:44.592932 | orchestrator | TASK [Attach test volume] ******************************************************
2025-07-05 23:38:44.592944 | orchestrator | Saturday 05 July 2025  23:38:25 +0000 (0:00:06.690)       0:05:30.409 *********
2025-07-05 23:38:44.592955 | orchestrator | changed: [localhost]
2025-07-05 23:38:44.592965 | orchestrator |
2025-07-05 23:38:44.592976 | orchestrator | TASK [Create floating ip address] **********************************************
2025-07-05 23:38:44.592987 | orchestrator | Saturday 05 July 2025  23:38:39 +0000 (0:00:13.384)       0:05:43.793 *********
2025-07-05 23:38:44.592999 | orchestrator | ok: [localhost]
2025-07-05 23:38:44.593010 | orchestrator |
2025-07-05 23:38:44.593021 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-07-05 23:38:44.593032 | orchestrator | Saturday 05 July 2025  23:38:44 +0000 (0:00:05.108)       0:05:48.901 *********
2025-07-05 23:38:44.593043 | orchestrator | ok: [localhost] => {
2025-07-05 23:38:44.593055 | orchestrator |  "msg": "192.168.112.179"
2025-07-05 23:38:44.593066 | orchestrator | }
2025-07-05 23:38:44.593077 | orchestrator |
2025-07-05 23:38:44.593087 | orchestrator | PLAY RECAP *********************************************************************
2025-07-05 23:38:44.593097 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-05 23:38:44.593108 | orchestrator |
2025-07-05 23:38:44.593118 | orchestrator |
2025-07-05 23:38:44.593127 | orchestrator | TASKS RECAP ********************************************************************
2025-07-05 23:38:44.593137 | orchestrator | Saturday 05 July 2025  23:38:44 +0000 (0:00:00.039)       0:05:48.941 *********
2025-07-05 23:38:44.593147 | orchestrator | ===============================================================================
2025-07-05 23:38:44.593157 | orchestrator | Create test instances ------------------------------------------------- 193.98s
2025-07-05 23:38:44.593167 | orchestrator | Add tag to instances --------------------------------------------------- 32.67s
2025-07-05 23:38:44.593176 | orchestrator | Add metadata to instances ---------------------------------------------- 23.61s
2025-07-05 23:38:44.593186 | orchestrator | Create test network topology ------------------------------------------- 14.69s
2025-07-05 23:38:44.593196 | orchestrator | Attach test volume ----------------------------------------------------- 13.38s
2025-07-05 23:38:44.593205 | orchestrator | Add member roles to user test ------------------------------------------ 11.84s
2025-07-05 23:38:44.593220 | orchestrator | Create test volume ------------------------------------------------------ 6.69s
2025-07-05 23:38:44.593237 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.00s
2025-07-05 23:38:44.593247 | orchestrator | Create floating ip address ---------------------------------------------- 5.11s
2025-07-05 23:38:44.593257 | orchestrator | Create test server group ------------------------------------------------ 4.76s
2025-07-05 23:38:44.593267 | orchestrator | Create ssh security group ----------------------------------------------- 4.72s
2025-07-05 23:38:44.593276 | orchestrator | Create test user -------------------------------------------------------- 4.16s
2025-07-05 23:38:44.593286 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.14s
2025-07-05 23:38:44.593296 | orchestrator | Create test-admin user -------------------------------------------------- 4.09s
2025-07-05 23:38:44.593305 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.07s
2025-07-05 23:38:44.593315 | orchestrator | Create icmp security group ---------------------------------------------- 3.83s
2025-07-05 23:38:44.593324 | orchestrator | Create test project ----------------------------------------------------- 3.78s
2025-07-05 23:38:44.593334 | orchestrator | Create test keypair ----------------------------------------------------- 3.78s
2025-07-05 23:38:44.593344 | orchestrator | Create test domain ------------------------------------------------------ 3.50s
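The `TASKS RECAP` timings above come from Ansible's `profile_tasks` callback; each line is a task name, a run of dashes as filler, and a duration in seconds. For trend analysis across job runs, such lines can be parsed into machine-readable pairs. A minimal sketch (the helper name is an assumption):

```python
import re

# A profile_tasks recap line, e.g.:
#   "Create test instances ------------------------------------------------- 193.98s"
RECAP = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+(?:\.\d+)?)s$")

def parse_recap_line(line: str):
    """Return (task_name, seconds) for a recap line, or None if it does not match."""
    m = RECAP.match(line.strip())
    if m is None:
        return None
    return m.group("task"), float(m.group("secs"))

print(parse_recap_line("Create test instances ------- 193.98s"))
# -> ('Create test instances', 193.98)
```

Applied to a whole recap block, the resulting pairs can be sorted or diffed to spot tasks that slow down between runs.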
2025-07-05 23:38:44.593354 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s
2025-07-05 23:38:44.844724 | orchestrator | + server_list
2025-07-05 23:38:44.844824 | orchestrator | + openstack --os-cloud test server list
2025-07-05 23:38:48.448840 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-05 23:38:48.448966 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-07-05 23:38:48.448983 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-05 23:38:48.448994 | orchestrator | | a65b3d8a-ed05-44dd-b101-d5eebeb92e91 | test-4 | ACTIVE | auto_allocated_network=10.42.0.43, 192.168.112.169 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-05 23:38:48.449005 | orchestrator | | 92da681d-1e0a-4cef-9266-5b6b6e1abc9f | test-3 | ACTIVE | auto_allocated_network=10.42.0.42, 192.168.112.105 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-05 23:38:48.449016 | orchestrator | | 77d615d9-1351-4c0f-b6e9-d327af53e884 | test-2 | ACTIVE | auto_allocated_network=10.42.0.45, 192.168.112.181 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-05 23:38:48.449027 | orchestrator | | dda2e452-912b-49b5-aab6-4a80a320b26a | test-1 | ACTIVE | auto_allocated_network=10.42.0.5, 192.168.112.186 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-05 23:38:48.449038 | orchestrator | | c98be92f-4288-4892-81c4-54e2e7421664 | test | ACTIVE | auto_allocated_network=10.42.0.59, 192.168.112.179 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-05 23:38:48.449049 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-05 23:38:48.701680 | orchestrator | + openstack --os-cloud test server show test
2025-07-05 23:38:52.090905 | orchestrator |
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-05 23:38:52.090988 | orchestrator | | Field | Value |
2025-07-05 23:38:52.090996 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-05 23:38:52.091019 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-05 23:38:52.091036 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-05 23:38:52.091044 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-05 23:38:52.091050 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-07-05 23:38:52.091056 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-05 23:38:52.091061 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-05 23:38:52.091067 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-05 23:38:52.091073 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-05 23:38:52.091091 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-05 23:38:52.091098 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-05 23:38:52.091104 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-05 23:38:52.091114 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-05 23:38:52.091120 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-05 23:38:52.091129 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-05 23:38:52.091135 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-05 23:38:52.091141 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-05T23:34:39.000000 |
2025-07-05 23:38:52.091147 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-05 23:38:52.091153 | orchestrator | | accessIPv4 | |
2025-07-05 23:38:52.091159 | orchestrator | | accessIPv6 | |
2025-07-05 23:38:52.091165 | orchestrator | | addresses | auto_allocated_network=10.42.0.59, 192.168.112.179 |
2025-07-05 23:38:52.091175 | orchestrator | | config_drive | |
2025-07-05 23:38:52.091185 | orchestrator | | created | 2025-07-05T23:34:17Z |
2025-07-05 23:38:52.091191 | orchestrator | | description | None |
2025-07-05 23:38:52.091197 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-05 23:38:52.091203 | orchestrator | | hostId | 5f20ae5709d1023e3f5aa969fdf8d85dfd0dfd6fde4ca173491f524a |
2025-07-05 23:38:52.091212 | orchestrator | | host_status | None |
2025-07-05 23:38:52.091219 | orchestrator | | id | c98be92f-4288-4892-81c4-54e2e7421664 |
2025-07-05 23:38:52.091225 | orchestrator | | image | Cirros 0.6.2 (51e80e40-4cbc-4d1e-9360-a7565579f9e1) |
2025-07-05 23:38:52.091231 | orchestrator | | key_name | test |
2025-07-05 23:38:52.091236 | orchestrator | | locked | False |
2025-07-05 23:38:52.091242 | orchestrator | | locked_reason | None |
2025-07-05 23:38:52.091248 | orchestrator | | name | test |
2025-07-05 23:38:52.091262 | orchestrator | | pinned_availability_zone | None |
2025-07-05 23:38:52.091268 | orchestrator | | progress | 0 |
2025-07-05 23:38:52.091274 | orchestrator | | project_id | 120c2b44b1174c7b9269857bbbabaa3f |
2025-07-05 23:38:52.091280 | orchestrator | | properties | hostname='test' |
2025-07-05 23:38:52.091286 | orchestrator | | security_groups | name='ssh' |
2025-07-05 23:38:52.091292 | orchestrator | | | name='icmp' |
2025-07-05 23:38:52.091298 | orchestrator | | server_groups | None |
2025-07-05 23:38:52.091308 | orchestrator | | status | ACTIVE |
2025-07-05 23:38:52.091315 | orchestrator | | tags | test |
2025-07-05 23:38:52.091321 | orchestrator | | trusted_image_certificates | None |
2025-07-05 23:38:52.091326 | orchestrator | | updated | 2025-07-05T23:37:27Z |
2025-07-05 23:38:52.091339 | orchestrator | | user_id | 72ede28e7d4b4b3689b719a0e74a225f |
2025-07-05 23:38:52.091345 | orchestrator | | volumes_attached | delete_on_termination='False', id='770ca264-c07d-475f-9e16-463363220e42' |
2025-07-05 23:38:52.094570 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-05 23:38:52.334719 | orchestrator | + openstack --os-cloud test server show test-1
2025-07-05 23:38:55.369864 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-05 23:38:55.369965 | orchestrator | | Field | Value |
2025-07-05 23:38:55.370004 | orchestrator |
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-05 23:38:55.370082 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-05 23:38:55.370104 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-05 23:38:55.370120 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-05 23:38:55.370135 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-07-05 23:38:55.370149 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-05 23:38:55.370197 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-05 23:38:55.370211 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-05 23:38:55.370221 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-05 23:38:55.370247 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-05 23:38:55.370256 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-05 23:38:55.370272 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-05 23:38:55.370281 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-05 23:38:55.370290 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-05 23:38:55.370299 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-05 23:38:55.370308 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-05 23:38:55.370323 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-05T23:35:23.000000 |
2025-07-05 23:38:55.370332 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-05 23:38:55.370340 | orchestrator | | accessIPv4 | |
2025-07-05 23:38:55.370349 | orchestrator | | accessIPv6 | |
2025-07-05 23:38:55.370358 | orchestrator | | addresses | auto_allocated_network=10.42.0.5, 192.168.112.186 |
2025-07-05 23:38:55.370373 | orchestrator | | config_drive | |
2025-07-05 23:38:55.370382 | orchestrator | | created | 2025-07-05T23:35:01Z |
2025-07-05 23:38:55.370395 | orchestrator | | description | None |
2025-07-05 23:38:55.370404 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-05 23:38:55.370414 | orchestrator | | hostId | f1b9d24dba39f99e5170fb0ecf19fe284559161d64d2d5014acf3c6b |
2025-07-05 23:38:55.370425 | orchestrator | | host_status | None |
2025-07-05 23:38:55.370440 | orchestrator | | id | dda2e452-912b-49b5-aab6-4a80a320b26a |
2025-07-05 23:38:55.370451 | orchestrator | | image | Cirros 0.6.2 (51e80e40-4cbc-4d1e-9360-a7565579f9e1) |
2025-07-05 23:38:55.370462 | orchestrator | | key_name | test |
2025-07-05 23:38:55.370472 | orchestrator | | locked | False |
2025-07-05 23:38:55.370483 | orchestrator | | locked_reason | None |
2025-07-05 23:38:55.370495 | orchestrator | | name | test-1 |
2025-07-05 23:38:55.370510 | orchestrator | | pinned_availability_zone | None |
2025-07-05 23:38:55.370521 | orchestrator | | progress | 0 |
2025-07-05 23:38:55.370536 | orchestrator | | project_id | 120c2b44b1174c7b9269857bbbabaa3f |
2025-07-05 23:38:55.370547 | orchestrator | | properties | hostname='test-1' |
2025-07-05 23:38:55.370557 | orchestrator | | security_groups | name='ssh' |
2025-07-05 23:38:55.370572 | orchestrator | | | name='icmp' |
2025-07-05 23:38:55.370583 | orchestrator | | server_groups | None |
2025-07-05 23:38:55.370645 | orchestrator | | status | ACTIVE |
2025-07-05 23:38:55.370656 | orchestrator | | tags | test |
2025-07-05 23:38:55.370667 | orchestrator | | trusted_image_certificates | None |
2025-07-05 23:38:55.370678 | orchestrator | | updated | 2025-07-05T23:37:32Z |
2025-07-05 23:38:55.370693 | orchestrator | | user_id | 72ede28e7d4b4b3689b719a0e74a225f |
2025-07-05 23:38:55.370703 | orchestrator | | volumes_attached | |
2025-07-05 23:38:55.374331 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-05 23:38:55.627248 | orchestrator | + openstack --os-cloud test server show test-2
2025-07-05 23:38:58.757364 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-05 23:38:58.757514 | orchestrator | | Field | Value |
2025-07-05 23:38:58.757570 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-05 23:38:58.757584 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-05 23:38:58.757649 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-05 23:38:58.757662 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-05 23:38:58.757692 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-07-05 23:38:58.757704 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-05 23:38:58.757716 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-05 23:38:58.757756 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-05 23:38:58.757768 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-05 23:38:58.757804 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-05 23:38:58.757832 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-05 23:38:58.757844 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-05 23:38:58.757858 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-05 23:38:58.757870 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-05 23:38:58.757883 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-05 23:38:58.757896 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-05 23:38:58.757910 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-05T23:36:04.000000 |
2025-07-05 23:38:58.757923 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-05 23:38:58.757936 | orchestrator | | accessIPv4 | |
2025-07-05 23:38:58.757949 | orchestrator | | accessIPv6 | |
2025-07-05 23:38:58.757962 | orchestrator | | addresses | auto_allocated_network=10.42.0.45, 192.168.112.181 |
2025-07-05 23:38:58.757996 | orchestrator | | config_drive | |
2025-07-05 23:38:58.758010 | orchestrator | | created | 2025-07-05T23:35:40Z |
2025-07-05 23:38:58.758084 | orchestrator | | description | None |
2025-07-05 23:38:58.758098 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-05 23:38:58.758110 | orchestrator | | hostId | e108eef9318c5c124862055ee599ee8269d5967dc791e91431d4e552 |
2025-07-05 23:38:58.758121 | orchestrator | | host_status | None |
2025-07-05 23:38:58.758132 | orchestrator | | id |
77d615d9-1351-4c0f-b6e9-d327af53e884 | 2025-07-05 23:38:58.758143 | orchestrator | | image | Cirros 0.6.2 (51e80e40-4cbc-4d1e-9360-a7565579f9e1) | 2025-07-05 23:38:58.758154 | orchestrator | | key_name | test | 2025-07-05 23:38:58.758165 | orchestrator | | locked | False | 2025-07-05 23:38:58.758176 | orchestrator | | locked_reason | None | 2025-07-05 23:38:58.758199 | orchestrator | | name | test-2 | 2025-07-05 23:38:58.758219 | orchestrator | | pinned_availability_zone | None | 2025-07-05 23:38:58.758232 | orchestrator | | progress | 0 | 2025-07-05 23:38:58.758243 | orchestrator | | project_id | 120c2b44b1174c7b9269857bbbabaa3f | 2025-07-05 23:38:58.758254 | orchestrator | | properties | hostname='test-2' | 2025-07-05 23:38:58.758265 | orchestrator | | security_groups | name='ssh' | 2025-07-05 23:38:58.758276 | orchestrator | | | name='icmp' | 2025-07-05 23:38:58.758288 | orchestrator | | server_groups | None | 2025-07-05 23:38:58.758299 | orchestrator | | status | ACTIVE | 2025-07-05 23:38:58.758310 | orchestrator | | tags | test | 2025-07-05 23:38:58.758321 | orchestrator | | trusted_image_certificates | None | 2025-07-05 23:38:58.758338 | orchestrator | | updated | 2025-07-05T23:37:36Z | 2025-07-05 23:38:58.758360 | orchestrator | | user_id | 72ede28e7d4b4b3689b719a0e74a225f | 2025-07-05 23:38:58.758372 | orchestrator | | volumes_attached | | 2025-07-05 23:38:58.761467 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-05 23:38:59.021270 | orchestrator | + openstack --os-cloud test server show test-3 2025-07-05 23:39:02.185843 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-05 23:39:02.185957 | orchestrator | | Field | Value | 2025-07-05 23:39:02.185973 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-05 23:39:02.185985 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-05 23:39:02.185996 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-05 23:39:02.186008 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-05 23:39:02.186106 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-07-05 23:39:02.186122 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-05 23:39:02.186133 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-07-05 23:39:02.186159 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-05 23:39:02.186171 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-05 23:39:02.186201 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-05 23:39:02.186214 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-05 23:39:02.186225 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-05 23:39:02.186236 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-05 23:39:02.186247 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-05 23:39:02.186258 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-05 23:39:02.186276 | 
orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-05 23:39:02.186304 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-05T23:36:39.000000 | 2025-07-05 23:39:02.186316 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-07-05 23:39:02.186339 | orchestrator | | accessIPv4 | | 2025-07-05 23:39:02.186350 | orchestrator | | accessIPv6 | | 2025-07-05 23:39:02.186362 | orchestrator | | addresses | auto_allocated_network=10.42.0.42, 192.168.112.105 | 2025-07-05 23:39:02.186381 | orchestrator | | config_drive | | 2025-07-05 23:39:02.186394 | orchestrator | | created | 2025-07-05T23:36:23Z | 2025-07-05 23:39:02.186408 | orchestrator | | description | None | 2025-07-05 23:39:02.186422 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-05 23:39:02.186443 | orchestrator | | hostId | 5f20ae5709d1023e3f5aa969fdf8d85dfd0dfd6fde4ca173491f524a | 2025-07-05 23:39:02.186464 | orchestrator | | host_status | None | 2025-07-05 23:39:02.186477 | orchestrator | | id | 92da681d-1e0a-4cef-9266-5b6b6e1abc9f | 2025-07-05 23:39:02.186490 | orchestrator | | image | Cirros 0.6.2 (51e80e40-4cbc-4d1e-9360-a7565579f9e1) | 2025-07-05 23:39:02.186504 | orchestrator | | key_name | test | 2025-07-05 23:39:02.186521 | orchestrator | | locked | False | 2025-07-05 23:39:02.186535 | orchestrator | | locked_reason | None | 2025-07-05 23:39:02.186549 | orchestrator | | name | test-3 | 2025-07-05 23:39:02.186568 | orchestrator | | pinned_availability_zone | None | 2025-07-05 23:39:02.186582 | orchestrator | | progress | 0 | 2025-07-05 23:39:02.186627 | orchestrator | | project_id | 120c2b44b1174c7b9269857bbbabaa3f | 2025-07-05 23:39:02.186641 | orchestrator | | properties | hostname='test-3' | 2025-07-05 
23:39:02.186662 | orchestrator | | security_groups | name='ssh' | 2025-07-05 23:39:02.186676 | orchestrator | | | name='icmp' | 2025-07-05 23:39:02.186689 | orchestrator | | server_groups | None | 2025-07-05 23:39:02.186702 | orchestrator | | status | ACTIVE | 2025-07-05 23:39:02.186715 | orchestrator | | tags | test | 2025-07-05 23:39:02.186733 | orchestrator | | trusted_image_certificates | None | 2025-07-05 23:39:02.186745 | orchestrator | | updated | 2025-07-05T23:37:41Z | 2025-07-05 23:39:02.186762 | orchestrator | | user_id | 72ede28e7d4b4b3689b719a0e74a225f | 2025-07-05 23:39:02.186773 | orchestrator | | volumes_attached | | 2025-07-05 23:39:02.190219 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-05 23:39:02.447234 | orchestrator | + openstack --os-cloud test server show test-4 2025-07-05 23:39:05.762988 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-05 23:39:05.763093 | orchestrator | | Field | Value | 2025-07-05 23:39:05.763109 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-05 
23:39:05.763122 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-05 23:39:05.763133 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-05 23:39:05.763144 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-05 23:39:05.763173 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-07-05 23:39:05.763185 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-05 23:39:05.763197 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-07-05 23:39:05.763209 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-05 23:39:05.763221 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-05 23:39:05.763270 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-05 23:39:05.763284 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-05 23:39:05.763295 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-05 23:39:05.763306 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-05 23:39:05.763317 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-05 23:39:05.763329 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-05 23:39:05.763340 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-05 23:39:05.763356 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-05T23:37:12.000000 | 2025-07-05 23:39:05.763368 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-07-05 23:39:05.763379 | orchestrator | | accessIPv4 | | 2025-07-05 23:39:05.763390 | orchestrator | | accessIPv6 | | 2025-07-05 23:39:05.763410 | orchestrator | | addresses | auto_allocated_network=10.42.0.43, 192.168.112.169 | 2025-07-05 23:39:05.763428 | orchestrator | | config_drive | | 2025-07-05 23:39:05.763440 | orchestrator | | created | 2025-07-05T23:36:56Z | 2025-07-05 23:39:05.763451 | orchestrator | | description | None | 2025-07-05 23:39:05.763462 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-05 23:39:05.763473 | orchestrator | | hostId | f1b9d24dba39f99e5170fb0ecf19fe284559161d64d2d5014acf3c6b | 2025-07-05 23:39:05.763484 | orchestrator | | host_status | None | 2025-07-05 23:39:05.763500 | orchestrator | | id | a65b3d8a-ed05-44dd-b101-d5eebeb92e91 | 2025-07-05 23:39:05.763511 | orchestrator | | image | Cirros 0.6.2 (51e80e40-4cbc-4d1e-9360-a7565579f9e1) | 2025-07-05 23:39:05.763523 | orchestrator | | key_name | test | 2025-07-05 23:39:05.763547 | orchestrator | | locked | False | 2025-07-05 23:39:05.763558 | orchestrator | | locked_reason | None | 2025-07-05 23:39:05.763569 | orchestrator | | name | test-4 | 2025-07-05 23:39:05.763587 | orchestrator | | pinned_availability_zone | None | 2025-07-05 23:39:05.763631 | orchestrator | | progress | 0 | 2025-07-05 23:39:05.763643 | orchestrator | | project_id | 120c2b44b1174c7b9269857bbbabaa3f | 2025-07-05 23:39:05.763654 | orchestrator | | properties | hostname='test-4' | 2025-07-05 23:39:05.763665 | orchestrator | | security_groups | name='ssh' | 2025-07-05 23:39:05.763676 | orchestrator | | | name='icmp' | 2025-07-05 23:39:05.763692 | orchestrator | | server_groups | None | 2025-07-05 23:39:05.763704 | orchestrator | | status | ACTIVE | 2025-07-05 23:39:05.763722 | orchestrator | | tags | test | 2025-07-05 23:39:05.763733 | orchestrator | | trusted_image_certificates | None | 2025-07-05 23:39:05.763744 | orchestrator | | updated | 2025-07-05T23:37:46Z | 2025-07-05 23:39:05.763760 | orchestrator | | user_id | 72ede28e7d4b4b3689b719a0e74a225f | 2025-07-05 23:39:05.763772 | orchestrator | | volumes_attached | | 2025-07-05 23:39:05.767664 | orchestrator | 
+-------------------------------------+-----------------------------------------------------+
2025-07-05 23:39:06.019295 | orchestrator | + server_ping
++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
++ tr -d '\r'
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.181
PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data.
64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=9.94 ms
64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.64 ms
64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.89 ms

--- 192.168.112.181 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.894/4.827/9.943/3.630 ms
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.179
PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=6.28 ms
64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.32 ms
64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.22 ms

--- 192.168.112.179 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 2.220/3.608/6.284/1.892 ms
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.186
PING 192.168.112.186 (192.168.112.186) 56(84) bytes of data.
64 bytes from 192.168.112.186: icmp_seq=1 ttl=63 time=7.05 ms
64 bytes from 192.168.112.186: icmp_seq=2 ttl=63 time=2.34 ms
64 bytes from 192.168.112.186: icmp_seq=3 ttl=63 time=2.05 ms

--- 192.168.112.186 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 2.051/3.811/7.047/2.290 ms
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.105
PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data.
64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=6.61 ms
64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.61 ms
64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=1.70 ms

--- 192.168.112.105 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.696/3.636/6.606/2.132 ms
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.169
PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=9.55 ms
64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.34 ms
64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=1.89 ms

--- 192.168.112.169 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.890/4.593/9.550/3.509 ms
+ [[ latest == \l\a\t\e\s\t ]]
+ compute_list
2025-07-05 23:39:18.837588 | orchestrator | + osism manage compute list testbed-node-3
+--------------------------------------+--------+----------+
| ID | Name | Status |
|--------------------------------------+--------+----------|
| 92da681d-1e0a-4cef-9266-5b6b6e1abc9f | test-3 | ACTIVE |
| c98be92f-4288-4892-81c4-54e2e7421664 | test | ACTIVE |
+--------------------------------------+--------+----------+
2025-07-05 23:39:22.694515 | orchestrator | + osism manage compute list testbed-node-4
+--------------------------------------+--------+----------+
| ID | Name | Status |
|--------------------------------------+--------+----------|
| a65b3d8a-ed05-44dd-b101-d5eebeb92e91 | test-4 | ACTIVE |
| dda2e452-912b-49b5-aab6-4a80a320b26a | test-1 | ACTIVE |
+--------------------------------------+--------+----------+
2025-07-05 23:39:26.123468 | orchestrator | + osism manage compute list testbed-node-5
+--------------------------------------+--------+----------+
| ID | Name | Status |
|--------------------------------------+--------+----------|
| 77d615d9-1351-4c0f-b6e9-d327af53e884 | test-2 | ACTIVE |
+--------------------------------------+--------+----------+
2025-07-05 23:39:29.330119 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2025-07-05 23:39:32 | INFO  | Live migrating server a65b3d8a-ed05-44dd-b101-d5eebeb92e91
2025-07-05 23:39:45 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress
2025-07-05 23:39:47 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress
2025-07-05 23:39:50 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress
2025-07-05 23:39:52 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress
2025-07-05 23:39:54 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress
2025-07-05 23:39:57 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress
2025-07-05 23:39:59 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) completed with status ACTIVE
2025-07-05 23:39:59 | INFO  | Live migrating server dda2e452-912b-49b5-aab6-4a80a320b26a
2025-07-05 23:40:12 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress
2025-07-05 23:40:14 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress
2025-07-05 23:40:17 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress
2025-07-05 23:40:19 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress
2025-07-05 23:40:21 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress
2025-07-05 23:40:24 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress
2025-07-05 23:40:26 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) completed with status ACTIVE
+ compute_list
2025-07-05 23:40:26.694182 | orchestrator | + osism manage compute list testbed-node-3
+--------------------------------------+--------+----------+
| ID | Name | Status |
|--------------------------------------+--------+----------|
| a65b3d8a-ed05-44dd-b101-d5eebeb92e91 | test-4 | ACTIVE |
| 92da681d-1e0a-4cef-9266-5b6b6e1abc9f | test-3 | ACTIVE |
| dda2e452-912b-49b5-aab6-4a80a320b26a | test-1 | ACTIVE |
| c98be92f-4288-4892-81c4-54e2e7421664 | test | ACTIVE |
+--------------------------------------+--------+----------+
2025-07-05 23:40:29.953121 | orchestrator | + osism manage compute list testbed-node-4
+------+--------+----------+
| ID | Name | Status |
|------+--------+----------|
+------+--------+----------+
2025-07-05 23:40:32.904422 | orchestrator | + osism manage compute list testbed-node-5
+--------------------------------------+--------+----------+
| ID | Name | Status |
|--------------------------------------+--------+----------| 2025-07-05 23:40:35.901834 | orchestrator | | 77d615d9-1351-4c0f-b6e9-d327af53e884 | test-2 | ACTIVE | 2025-07-05 23:40:35.901854 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-05 23:40:36.179198 | orchestrator | + server_ping 2025-07-05 23:40:36.179927 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-07-05 23:40:36.180207 | orchestrator | ++ tr -d '\r' 2025-07-05 23:40:38.907725 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:40:38.907828 | orchestrator | + ping -c3 192.168.112.181 2025-07-05 23:40:38.920873 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 2025-07-05 23:40:38.920902 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=11.1 ms 2025-07-05 23:40:39.914188 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.51 ms 2025-07-05 23:40:40.915228 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.64 ms 2025-07-05 23:40:40.916090 | orchestrator | 2025-07-05 23:40:40.916127 | orchestrator | --- 192.168.112.181 ping statistics --- 2025-07-05 23:40:40.916142 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-05 23:40:40.916157 | orchestrator | rtt min/avg/max/mdev = 1.642/5.069/11.060/4.250 ms 2025-07-05 23:40:40.916187 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:40:40.916200 | orchestrator | + ping -c3 192.168.112.179 2025-07-05 23:40:40.928021 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 
2025-07-05 23:40:40.928075 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=7.17 ms 2025-07-05 23:40:41.923817 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.06 ms 2025-07-05 23:40:42.923929 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.72 ms 2025-07-05 23:40:42.924034 | orchestrator | 2025-07-05 23:40:42.924232 | orchestrator | --- 192.168.112.179 ping statistics --- 2025-07-05 23:40:42.924248 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-07-05 23:40:42.924260 | orchestrator | rtt min/avg/max/mdev = 1.721/3.647/7.166/2.491 ms 2025-07-05 23:40:42.924285 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:40:42.924298 | orchestrator | + ping -c3 192.168.112.186 2025-07-05 23:40:42.937228 | orchestrator | PING 192.168.112.186 (192.168.112.186) 56(84) bytes of data. 2025-07-05 23:40:42.937292 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=1 ttl=63 time=7.98 ms 2025-07-05 23:40:43.933153 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=2 ttl=63 time=2.41 ms 2025-07-05 23:40:44.934356 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=3 ttl=63 time=1.86 ms 2025-07-05 23:40:44.934468 | orchestrator | 2025-07-05 23:40:44.934486 | orchestrator | --- 192.168.112.186 ping statistics --- 2025-07-05 23:40:44.934502 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-05 23:40:44.934515 | orchestrator | rtt min/avg/max/mdev = 1.860/4.082/7.975/2.761 ms 2025-07-05 23:40:44.934926 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:40:44.934949 | orchestrator | + ping -c3 192.168.112.105 2025-07-05 23:40:44.946291 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data. 
2025-07-05 23:40:44.946363 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=6.88 ms 2025-07-05 23:40:45.943104 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.44 ms 2025-07-05 23:40:46.945251 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=2.01 ms 2025-07-05 23:40:46.945386 | orchestrator | 2025-07-05 23:40:46.945402 | orchestrator | --- 192.168.112.105 ping statistics --- 2025-07-05 23:40:46.945414 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-05 23:40:46.945426 | orchestrator | rtt min/avg/max/mdev = 2.013/3.776/6.882/2.202 ms 2025-07-05 23:40:46.945438 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:40:46.945449 | orchestrator | + ping -c3 192.168.112.169 2025-07-05 23:40:46.960060 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data. 2025-07-05 23:40:46.960139 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=10.4 ms 2025-07-05 23:40:47.954298 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.73 ms 2025-07-05 23:40:48.955320 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=1.82 ms 2025-07-05 23:40:48.955423 | orchestrator | 2025-07-05 23:40:48.955438 | orchestrator | --- 192.168.112.169 ping statistics --- 2025-07-05 23:40:48.955452 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-05 23:40:48.955464 | orchestrator | rtt min/avg/max/mdev = 1.821/4.968/10.353/3.825 ms 2025-07-05 23:40:48.955860 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-07-05 23:40:51.904114 | orchestrator | 2025-07-05 23:40:51 | INFO  | Live migrating server 77d615d9-1351-4c0f-b6e9-d327af53e884 2025-07-05 23:41:05.257579 | orchestrator | 2025-07-05 23:41:05 | INFO  | Live migration of 
77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress
2025-07-05 23:41:07.623308 | orchestrator | 2025-07-05 23:41:07 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress
2025-07-05 23:41:10.001774 | orchestrator | 2025-07-05 23:41:10 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress
2025-07-05 23:41:12.283491 | orchestrator | 2025-07-05 23:41:12 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress
2025-07-05 23:41:14.562942 | orchestrator | 2025-07-05 23:41:14 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress
2025-07-05 23:41:16.852213 | orchestrator | 2025-07-05 23:41:16 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress
2025-07-05 23:41:19.199506 | orchestrator | 2025-07-05 23:41:19 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress
2025-07-05 23:41:21.489237 | orchestrator | 2025-07-05 23:41:21 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) completed with status ACTIVE
2025-07-05 23:41:21.773144 | orchestrator | + compute_list
2025-07-05 23:41:21.773230 | orchestrator | + osism manage compute list testbed-node-3
2025-07-05 23:41:25.175829 | orchestrator | +--------------------------------------+--------+----------+
2025-07-05 23:41:25.175952 | orchestrator | | ID                                   | Name   | Status   |
2025-07-05 23:41:25.175968 | orchestrator | |--------------------------------------+--------+----------|
2025-07-05 23:41:25.175981 | orchestrator | | a65b3d8a-ed05-44dd-b101-d5eebeb92e91 | test-4 | ACTIVE   |
2025-07-05 23:41:25.175993 | orchestrator | | 92da681d-1e0a-4cef-9266-5b6b6e1abc9f | test-3 | ACTIVE   |
2025-07-05 23:41:25.176005 | orchestrator | | 77d615d9-1351-4c0f-b6e9-d327af53e884 | test-2 | ACTIVE   |
2025-07-05 23:41:25.176016 | orchestrator | | dda2e452-912b-49b5-aab6-4a80a320b26a | test-1 | ACTIVE   |
2025-07-05 23:41:25.176028 | orchestrator | | c98be92f-4288-4892-81c4-54e2e7421664 | test   | ACTIVE   |
2025-07-05 23:41:25.176040 | orchestrator | +--------------------------------------+--------+----------+
2025-07-05 23:41:25.444457 | orchestrator | + osism manage compute list testbed-node-4
2025-07-05 23:41:27.973255 | orchestrator | +------+--------+----------+
2025-07-05 23:41:27.973362 | orchestrator | | ID   | Name   | Status   |
2025-07-05 23:41:27.973380 | orchestrator | |------+--------+----------|
2025-07-05 23:41:27.973393 | orchestrator | +------+--------+----------+
2025-07-05 23:41:28.240295 | orchestrator | + osism manage compute list testbed-node-5
2025-07-05 23:41:30.848244 | orchestrator | +------+--------+----------+
2025-07-05 23:41:30.848358 | orchestrator | | ID   | Name   | Status   |
2025-07-05 23:41:30.848374 | orchestrator | |------+--------+----------|
2025-07-05 23:41:30.848386 | orchestrator | +------+--------+----------+
2025-07-05 23:41:31.133809 | orchestrator | + server_ping
2025-07-05 23:41:31.134829 | orchestrator | ++ tr -d '\r'
2025-07-05 23:41:31.134872 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-07-05 23:41:34.135567 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-05 23:41:34.135747 | orchestrator | + ping -c3 192.168.112.181
2025-07-05 23:41:34.151418 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data.
2025-07-05 23:41:34.151511 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=12.2 ms 2025-07-05 23:41:35.143041 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.36 ms 2025-07-05 23:41:36.144124 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.68 ms 2025-07-05 23:41:36.144353 | orchestrator | 2025-07-05 23:41:36.144375 | orchestrator | --- 192.168.112.181 ping statistics --- 2025-07-05 23:41:36.144389 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-07-05 23:41:36.144401 | orchestrator | rtt min/avg/max/mdev = 1.680/5.409/12.190/4.802 ms 2025-07-05 23:41:36.144426 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:41:36.144438 | orchestrator | + ping -c3 192.168.112.179 2025-07-05 23:41:36.155225 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 2025-07-05 23:41:36.155290 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=6.92 ms 2025-07-05 23:41:37.151745 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.47 ms 2025-07-05 23:41:38.153143 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.00 ms 2025-07-05 23:41:38.153250 | orchestrator | 2025-07-05 23:41:38.153266 | orchestrator | --- 192.168.112.179 ping statistics --- 2025-07-05 23:41:38.153279 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-07-05 23:41:38.153291 | orchestrator | rtt min/avg/max/mdev = 1.995/3.796/6.921/2.218 ms 2025-07-05 23:41:38.153824 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:41:38.153858 | orchestrator | + ping -c3 192.168.112.186 2025-07-05 23:41:38.165700 | orchestrator | PING 192.168.112.186 (192.168.112.186) 56(84) bytes of data. 
2025-07-05 23:41:38.165786 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=1 ttl=63 time=7.58 ms 2025-07-05 23:41:39.163098 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=2 ttl=63 time=2.91 ms 2025-07-05 23:41:40.163210 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=3 ttl=63 time=1.53 ms 2025-07-05 23:41:40.164079 | orchestrator | 2025-07-05 23:41:40.164123 | orchestrator | --- 192.168.112.186 ping statistics --- 2025-07-05 23:41:40.164134 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-05 23:41:40.164142 | orchestrator | rtt min/avg/max/mdev = 1.527/4.006/7.579/2.588 ms 2025-07-05 23:41:40.164162 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:41:40.164171 | orchestrator | + ping -c3 192.168.112.105 2025-07-05 23:41:40.174309 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data. 2025-07-05 23:41:40.174335 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=5.43 ms 2025-07-05 23:41:41.173505 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.53 ms 2025-07-05 23:41:42.174761 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=2.00 ms 2025-07-05 23:41:42.174877 | orchestrator | 2025-07-05 23:41:42.174895 | orchestrator | --- 192.168.112.105 ping statistics --- 2025-07-05 23:41:42.174908 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-05 23:41:42.174925 | orchestrator | rtt min/avg/max/mdev = 1.997/3.318/5.431/1.509 ms 2025-07-05 23:41:42.175492 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:41:42.175587 | orchestrator | + ping -c3 192.168.112.169 2025-07-05 23:41:42.191248 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data. 
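The `+` trace lines above make the `server_ping` helper straightforward to reconstruct. A minimal sketch, assuming the logged cloud name `test` and the 3-packet count seen in every `ping` call:

```shell
# Sketch of the server_ping helper implied by the set -x trace:
# ping every ACTIVE floating IP of the "test" cloud three times.
# Cloud name and packet count are taken from the logged commands.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}
```

The `tr -d '\r'` is not cosmetic: if the client output carries CRLF line endings, a trailing `\r` left on each address would corrupt the argument handed to `ping`.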
2025-07-05 23:41:42.191351 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=10.4 ms 2025-07-05 23:41:43.185572 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.77 ms 2025-07-05 23:41:44.186607 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=1.87 ms 2025-07-05 23:41:44.186784 | orchestrator | 2025-07-05 23:41:44.186803 | orchestrator | --- 192.168.112.169 ping statistics --- 2025-07-05 23:41:44.186816 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-05 23:41:44.186827 | orchestrator | rtt min/avg/max/mdev = 1.869/5.002/10.365/3.809 ms 2025-07-05 23:41:44.187044 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-07-05 23:41:47.339451 | orchestrator | 2025-07-05 23:41:47 | INFO  | Live migrating server a65b3d8a-ed05-44dd-b101-d5eebeb92e91 2025-07-05 23:41:59.879135 | orchestrator | 2025-07-05 23:41:59 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress 2025-07-05 23:42:02.200568 | orchestrator | 2025-07-05 23:42:02 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress 2025-07-05 23:42:04.475170 | orchestrator | 2025-07-05 23:42:04 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress 2025-07-05 23:42:06.843193 | orchestrator | 2025-07-05 23:42:06 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress 2025-07-05 23:42:09.204485 | orchestrator | 2025-07-05 23:42:09 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress 2025-07-05 23:42:11.445874 | orchestrator | 2025-07-05 23:42:11 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress 2025-07-05 23:42:13.727593 | orchestrator | 2025-07-05 23:42:13 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is 
still in progress 2025-07-05 23:42:16.077529 | orchestrator | 2025-07-05 23:42:16 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) completed with status ACTIVE 2025-07-05 23:42:16.077727 | orchestrator | 2025-07-05 23:42:16 | INFO  | Live migrating server 92da681d-1e0a-4cef-9266-5b6b6e1abc9f 2025-07-05 23:42:26.688472 | orchestrator | 2025-07-05 23:42:26 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) is still in progress 2025-07-05 23:42:29.083739 | orchestrator | 2025-07-05 23:42:29 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) is still in progress 2025-07-05 23:42:31.419258 | orchestrator | 2025-07-05 23:42:31 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) is still in progress 2025-07-05 23:42:33.702433 | orchestrator | 2025-07-05 23:42:33 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) is still in progress 2025-07-05 23:42:35.992049 | orchestrator | 2025-07-05 23:42:35 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) is still in progress 2025-07-05 23:42:38.272135 | orchestrator | 2025-07-05 23:42:38 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) is still in progress 2025-07-05 23:42:40.526162 | orchestrator | 2025-07-05 23:42:40 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) completed with status ACTIVE 2025-07-05 23:42:40.526333 | orchestrator | 2025-07-05 23:42:40 | INFO  | Live migrating server 77d615d9-1351-4c0f-b6e9-d327af53e884 2025-07-05 23:42:51.219354 | orchestrator | 2025-07-05 23:42:51 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:42:53.593906 | orchestrator | 2025-07-05 23:42:53 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:42:55.948028 | orchestrator | 2025-07-05 23:42:55 | INFO  | Live migration of 
77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:42:58.309199 | orchestrator | 2025-07-05 23:42:58 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:43:00.672096 | orchestrator | 2025-07-05 23:43:00 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:43:03.002591 | orchestrator | 2025-07-05 23:43:03 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:43:05.304537 | orchestrator | 2025-07-05 23:43:05 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:43:07.670257 | orchestrator | 2025-07-05 23:43:07 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) completed with status ACTIVE 2025-07-05 23:43:07.670372 | orchestrator | 2025-07-05 23:43:07 | INFO  | Live migrating server dda2e452-912b-49b5-aab6-4a80a320b26a 2025-07-05 23:43:17.526528 | orchestrator | 2025-07-05 23:43:17 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress 2025-07-05 23:43:19.858314 | orchestrator | 2025-07-05 23:43:19 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress 2025-07-05 23:43:22.162384 | orchestrator | 2025-07-05 23:43:22 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress 2025-07-05 23:43:24.469846 | orchestrator | 2025-07-05 23:43:24 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress 2025-07-05 23:43:26.819667 | orchestrator | 2025-07-05 23:43:26 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress 2025-07-05 23:43:29.139221 | orchestrator | 2025-07-05 23:43:29 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress 2025-07-05 23:43:31.523060 | orchestrator 
| 2025-07-05 23:43:31 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) completed with status ACTIVE
2025-07-05 23:43:31.523167 | orchestrator | 2025-07-05 23:43:31 | INFO  | Live migrating server c98be92f-4288-4892-81c4-54e2e7421664
2025-07-05 23:43:42.677016 | orchestrator | 2025-07-05 23:43:42 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:43:45.054435 | orchestrator | 2025-07-05 23:43:45 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:43:47.421451 | orchestrator | 2025-07-05 23:43:47 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:43:49.674886 | orchestrator | 2025-07-05 23:43:49 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:43:52.040933 | orchestrator | 2025-07-05 23:43:52 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:43:54.299677 | orchestrator | 2025-07-05 23:43:54 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:43:56.603513 | orchestrator | 2025-07-05 23:43:56 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:43:58.927855 | orchestrator | 2025-07-05 23:43:58 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:44:01.243561 | orchestrator | 2025-07-05 23:44:01 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:44:03.609287 | orchestrator | 2025-07-05 23:44:03 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) completed with status ACTIVE
2025-07-05 23:44:03.884176 | orchestrator | + compute_list
2025-07-05 23:44:03.884279 | orchestrator | + osism manage compute list testbed-node-3
2025-07-05 23:44:06.554324 | orchestrator | +------+--------+----------+
2025-07-05 23:44:06.554433 | orchestrator | | ID   | Name   | Status   |
2025-07-05 23:44:06.554447 | orchestrator | |------+--------+----------|
2025-07-05 23:44:06.554459 | orchestrator | +------+--------+----------+
2025-07-05 23:44:06.839757 | orchestrator | + osism manage compute list testbed-node-4
2025-07-05 23:44:09.957305 | orchestrator | +--------------------------------------+--------+----------+
2025-07-05 23:44:09.957426 | orchestrator | | ID                                   | Name   | Status   |
2025-07-05 23:44:09.957441 | orchestrator | |--------------------------------------+--------+----------|
2025-07-05 23:44:09.957453 | orchestrator | | a65b3d8a-ed05-44dd-b101-d5eebeb92e91 | test-4 | ACTIVE   |
2025-07-05 23:44:09.957464 | orchestrator | | 92da681d-1e0a-4cef-9266-5b6b6e1abc9f | test-3 | ACTIVE   |
2025-07-05 23:44:09.957476 | orchestrator | | 77d615d9-1351-4c0f-b6e9-d327af53e884 | test-2 | ACTIVE   |
2025-07-05 23:44:09.957487 | orchestrator | | dda2e452-912b-49b5-aab6-4a80a320b26a | test-1 | ACTIVE   |
2025-07-05 23:44:09.957498 | orchestrator | | c98be92f-4288-4892-81c4-54e2e7421664 | test   | ACTIVE   |
2025-07-05 23:44:09.957509 | orchestrator | +--------------------------------------+--------+----------+
2025-07-05 23:44:10.240584 | orchestrator | + osism manage compute list testbed-node-5
2025-07-05 23:44:12.887316 | orchestrator | +------+--------+----------+
2025-07-05 23:44:12.887420 | orchestrator | | ID   | Name   | Status   |
2025-07-05 23:44:12.887439 | orchestrator | |------+--------+----------|
2025-07-05 23:44:12.887451 | orchestrator | +------+--------+----------+
2025-07-05 23:44:13.195294 | orchestrator | + server_ping
2025-07-05 23:44:13.195474 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-07-05 23:44:13.196026 | orchestrator | ++ tr -d '\r'
2025-07-05 23:44:16.348092 | orchestrator | + for address in $(openstack --os-cloud test floating ip list
--status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:44:16.348206 | orchestrator | + ping -c3 192.168.112.181 2025-07-05 23:44:16.362466 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 2025-07-05 23:44:16.362496 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=12.5 ms 2025-07-05 23:44:17.354566 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.60 ms 2025-07-05 23:44:18.356135 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=2.31 ms 2025-07-05 23:44:18.356266 | orchestrator | 2025-07-05 23:44:18.356293 | orchestrator | --- 192.168.112.181 ping statistics --- 2025-07-05 23:44:18.356315 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-05 23:44:18.356334 | orchestrator | rtt min/avg/max/mdev = 2.311/5.805/12.511/4.742 ms 2025-07-05 23:44:18.356518 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:44:18.356540 | orchestrator | + ping -c3 192.168.112.179 2025-07-05 23:44:18.368310 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 
2025-07-05 23:44:18.368388 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=7.67 ms 2025-07-05 23:44:19.364939 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.62 ms 2025-07-05 23:44:20.366647 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.62 ms 2025-07-05 23:44:20.366749 | orchestrator | 2025-07-05 23:44:20.366811 | orchestrator | --- 192.168.112.179 ping statistics --- 2025-07-05 23:44:20.366825 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-05 23:44:20.366838 | orchestrator | rtt min/avg/max/mdev = 1.624/3.971/7.670/2.646 ms 2025-07-05 23:44:20.366850 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:44:20.366862 | orchestrator | + ping -c3 192.168.112.186 2025-07-05 23:44:20.376504 | orchestrator | PING 192.168.112.186 (192.168.112.186) 56(84) bytes of data. 2025-07-05 23:44:20.376540 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=1 ttl=63 time=6.18 ms 2025-07-05 23:44:21.374920 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=2 ttl=63 time=2.90 ms 2025-07-05 23:44:22.375967 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=3 ttl=63 time=2.00 ms 2025-07-05 23:44:22.376110 | orchestrator | 2025-07-05 23:44:22.376128 | orchestrator | --- 192.168.112.186 ping statistics --- 2025-07-05 23:44:22.376141 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-05 23:44:22.376153 | orchestrator | rtt min/avg/max/mdev = 2.004/3.693/6.176/1.793 ms 2025-07-05 23:44:22.376165 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:44:22.376177 | orchestrator | + ping -c3 192.168.112.105 2025-07-05 23:44:22.388742 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data. 
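The `osism manage compute migrate` output above is a poll loop: it starts one live migration per server, then reports "still in progress" every couple of seconds until the server settles. A hypothetical plain-shell equivalent — the `wait_for_live_migration` name, the `admin` cloud, and the status query are illustrative, not osism's actual implementation:

```shell
# Hypothetical re-creation of the polling that "osism manage compute
# migrate" logs: poll the server status until it leaves MIGRATING,
# then report the final state. All names here are illustrative.
wait_for_live_migration() {
    server="$1"
    while status=$(openstack --os-cloud admin server show "$server" -f value -c status) &&
            [ "$status" = "MIGRATING" ]; do
        echo "Live migration of $server is still in progress"
        sleep 2
    done
    echo "Live migration of $server completed with status $status"
}
```

The roughly 2.3-second spacing between the logged "still in progress" lines is consistent with a fixed short sleep between status polls.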
2025-07-05 23:44:22.388849 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=7.67 ms 2025-07-05 23:44:23.386891 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=3.07 ms 2025-07-05 23:44:24.387109 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=1.94 ms 2025-07-05 23:44:24.387379 | orchestrator | 2025-07-05 23:44:24.387404 | orchestrator | --- 192.168.112.105 ping statistics --- 2025-07-05 23:44:24.387417 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-05 23:44:24.387429 | orchestrator | rtt min/avg/max/mdev = 1.940/4.224/7.665/2.476 ms 2025-07-05 23:44:24.387454 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-05 23:44:24.387466 | orchestrator | + ping -c3 192.168.112.169 2025-07-05 23:44:24.402241 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data. 2025-07-05 23:44:24.402313 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=10.1 ms 2025-07-05 23:44:25.396287 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.76 ms 2025-07-05 23:44:26.397864 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=2.21 ms 2025-07-05 23:44:26.397989 | orchestrator | 2025-07-05 23:44:26.398005 | orchestrator | --- 192.168.112.169 ping statistics --- 2025-07-05 23:44:26.398141 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-05 23:44:26.398168 | orchestrator | rtt min/avg/max/mdev = 2.212/5.037/10.139/3.614 ms 2025-07-05 23:44:26.398429 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2025-07-05 23:44:29.728469 | orchestrator | 2025-07-05 23:44:29 | INFO  | Live migrating server a65b3d8a-ed05-44dd-b101-d5eebeb92e91 2025-07-05 23:44:41.981304 | orchestrator | 2025-07-05 23:44:41 | INFO  | Live migration of 
a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress 2025-07-05 23:44:44.306674 | orchestrator | 2025-07-05 23:44:44 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress 2025-07-05 23:44:46.847220 | orchestrator | 2025-07-05 23:44:46 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress 2025-07-05 23:44:49.130705 | orchestrator | 2025-07-05 23:44:49 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress 2025-07-05 23:44:51.391882 | orchestrator | 2025-07-05 23:44:51 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress 2025-07-05 23:44:53.675615 | orchestrator | 2025-07-05 23:44:53 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress 2025-07-05 23:44:55.993151 | orchestrator | 2025-07-05 23:44:55 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) is still in progress 2025-07-05 23:44:58.314515 | orchestrator | 2025-07-05 23:44:58 | INFO  | Live migration of a65b3d8a-ed05-44dd-b101-d5eebeb92e91 (test-4) completed with status ACTIVE 2025-07-05 23:44:58.314617 | orchestrator | 2025-07-05 23:44:58 | INFO  | Live migrating server 92da681d-1e0a-4cef-9266-5b6b6e1abc9f 2025-07-05 23:45:08.646883 | orchestrator | 2025-07-05 23:45:08 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) is still in progress 2025-07-05 23:45:11.002651 | orchestrator | 2025-07-05 23:45:11 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) is still in progress 2025-07-05 23:45:13.445360 | orchestrator | 2025-07-05 23:45:13 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) is still in progress 2025-07-05 23:45:15.740859 | orchestrator | 2025-07-05 23:45:15 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) is still in progress 2025-07-05 23:45:18.010572 | orchestrator 
| 2025-07-05 23:45:18 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) is still in progress 2025-07-05 23:45:20.273179 | orchestrator | 2025-07-05 23:45:20 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) is still in progress 2025-07-05 23:45:22.540512 | orchestrator | 2025-07-05 23:45:22 | INFO  | Live migration of 92da681d-1e0a-4cef-9266-5b6b6e1abc9f (test-3) completed with status ACTIVE 2025-07-05 23:45:22.540626 | orchestrator | 2025-07-05 23:45:22 | INFO  | Live migrating server 77d615d9-1351-4c0f-b6e9-d327af53e884 2025-07-05 23:45:32.286811 | orchestrator | 2025-07-05 23:45:32 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:45:34.744048 | orchestrator | 2025-07-05 23:45:34 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:45:37.146580 | orchestrator | 2025-07-05 23:45:37 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:45:39.434647 | orchestrator | 2025-07-05 23:45:39 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:45:41.747824 | orchestrator | 2025-07-05 23:45:41 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:45:44.047469 | orchestrator | 2025-07-05 23:45:44 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:45:46.307403 | orchestrator | 2025-07-05 23:45:46 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) is still in progress 2025-07-05 23:45:48.653868 | orchestrator | 2025-07-05 23:45:48 | INFO  | Live migration of 77d615d9-1351-4c0f-b6e9-d327af53e884 (test-2) completed with status ACTIVE 2025-07-05 23:45:48.653951 | orchestrator | 2025-07-05 23:45:48 | INFO  | Live migrating server dda2e452-912b-49b5-aab6-4a80a320b26a 2025-07-05 
23:45:58.799283 | orchestrator | 2025-07-05 23:45:58 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress 2025-07-05 23:46:01.143422 | orchestrator | 2025-07-05 23:46:01 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress 2025-07-05 23:46:03.419337 | orchestrator | 2025-07-05 23:46:03 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress 2025-07-05 23:46:05.690124 | orchestrator | 2025-07-05 23:46:05 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress 2025-07-05 23:46:08.040953 | orchestrator | 2025-07-05 23:46:08 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress 2025-07-05 23:46:10.304247 | orchestrator | 2025-07-05 23:46:10 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress 2025-07-05 23:46:12.548354 | orchestrator | 2025-07-05 23:46:12 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) is still in progress 2025-07-05 23:46:14.822314 | orchestrator | 2025-07-05 23:46:14 | INFO  | Live migration of dda2e452-912b-49b5-aab6-4a80a320b26a (test-1) completed with status ACTIVE 2025-07-05 23:46:14.822415 | orchestrator | 2025-07-05 23:46:14 | INFO  | Live migrating server c98be92f-4288-4892-81c4-54e2e7421664 2025-07-05 23:46:25.202280 | orchestrator | 2025-07-05 23:46:25 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress 2025-07-05 23:46:27.523451 | orchestrator | 2025-07-05 23:46:27 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress 2025-07-05 23:46:29.895575 | orchestrator | 2025-07-05 23:46:29 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress 2025-07-05 23:46:32.260329 | orchestrator | 2025-07-05 23:46:32 | INFO  | Live migration of 
c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:46:34.604712 | orchestrator | 2025-07-05 23:46:34 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:46:36.893251 | orchestrator | 2025-07-05 23:46:36 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:46:39.183247 | orchestrator | 2025-07-05 23:46:39 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:46:41.479863 | orchestrator | 2025-07-05 23:46:41 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:46:43.749592 | orchestrator | 2025-07-05 23:46:43 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) is still in progress
2025-07-05 23:46:46.027933 | orchestrator | 2025-07-05 23:46:46 | INFO  | Live migration of c98be92f-4288-4892-81c4-54e2e7421664 (test) completed with status ACTIVE
2025-07-05 23:46:46.308190 | orchestrator | + compute_list
2025-07-05 23:46:46.308288 | orchestrator | + osism manage compute list testbed-node-3
2025-07-05 23:46:48.864315 | orchestrator | +------+--------+----------+
2025-07-05 23:46:48.864448 | orchestrator | | ID   | Name   | Status   |
2025-07-05 23:46:48.864466 | orchestrator | |------+--------+----------|
2025-07-05 23:46:48.864478 | orchestrator | +------+--------+----------+
2025-07-05 23:46:49.135480 | orchestrator | + osism manage compute list testbed-node-4
2025-07-05 23:46:51.690239 | orchestrator | +------+--------+----------+
2025-07-05 23:46:51.690348 | orchestrator | | ID   | Name   | Status   |
2025-07-05 23:46:51.690361 | orchestrator | |------+--------+----------|
2025-07-05 23:46:51.690371 | orchestrator | +------+--------+----------+
2025-07-05 23:46:51.958117 | orchestrator | + osism manage compute list testbed-node-5
2025-07-05 23:46:55.034430 | orchestrator | +--------------------------------------+--------+----------+
2025-07-05 23:46:55.034538 | orchestrator | | ID                                   | Name   | Status   |
2025-07-05 23:46:55.034552 | orchestrator | |--------------------------------------+--------+----------|
2025-07-05 23:46:55.034563 | orchestrator | | a65b3d8a-ed05-44dd-b101-d5eebeb92e91 | test-4 | ACTIVE   |
2025-07-05 23:46:55.034574 | orchestrator | | 92da681d-1e0a-4cef-9266-5b6b6e1abc9f | test-3 | ACTIVE   |
2025-07-05 23:46:55.034585 | orchestrator | | 77d615d9-1351-4c0f-b6e9-d327af53e884 | test-2 | ACTIVE   |
2025-07-05 23:46:55.034596 | orchestrator | | dda2e452-912b-49b5-aab6-4a80a320b26a | test-1 | ACTIVE   |
2025-07-05 23:46:55.034608 | orchestrator | | c98be92f-4288-4892-81c4-54e2e7421664 | test   | ACTIVE   |
2025-07-05 23:46:55.034619 | orchestrator | +--------------------------------------+--------+----------+
2025-07-05 23:46:55.314512 | orchestrator | + server_ping
2025-07-05 23:46:55.316248 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-07-05 23:46:55.316275 | orchestrator | ++ tr -d '\r'
2025-07-05 23:46:58.408902 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-05 23:46:58.409046 | orchestrator | + ping -c3 192.168.112.181
2025-07-05 23:46:58.430283 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data.
2025-07-05 23:46:58.430394 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=17.8 ms
2025-07-05 23:46:59.417081 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.72 ms
2025-07-05 23:47:00.416729 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.57 ms
2025-07-05 23:47:00.416889 | orchestrator |
2025-07-05 23:47:00.416900 | orchestrator | --- 192.168.112.181 ping statistics ---
2025-07-05 23:47:00.416934 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-05 23:47:00.416942 | orchestrator | rtt min/avg/max/mdev = 1.568/7.361/17.794/7.392 ms
2025-07-05 23:47:00.417083 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-05 23:47:00.417100 | orchestrator | + ping -c3 192.168.112.179
2025-07-05 23:47:00.429826 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2025-07-05 23:47:00.429963 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=7.92 ms
2025-07-05 23:47:01.425100 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.40 ms
2025-07-05 23:47:02.427138 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.41 ms
2025-07-05 23:47:02.489398 | orchestrator |
2025-07-05 23:47:02.489494 | orchestrator | --- 192.168.112.179 ping statistics ---
2025-07-05 23:47:02.489510 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-07-05 23:47:02.489522 | orchestrator | rtt min/avg/max/mdev = 2.402/4.244/7.922/2.600 ms
2025-07-05 23:47:02.489560 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-05 23:47:02.489573 | orchestrator | + ping -c3 192.168.112.186
2025-07-05 23:47:02.489585 | orchestrator | PING 192.168.112.186 (192.168.112.186) 56(84) bytes of data.
2025-07-05 23:47:02.489597 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=1 ttl=63 time=7.92 ms
2025-07-05 23:47:03.436034 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=2 ttl=63 time=2.52 ms
2025-07-05 23:47:04.437316 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=3 ttl=63 time=2.02 ms
2025-07-05 23:47:04.437508 | orchestrator |
2025-07-05 23:47:04.437528 | orchestrator | --- 192.168.112.186 ping statistics ---
2025-07-05 23:47:04.437542 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-05 23:47:04.437553 | orchestrator | rtt min/avg/max/mdev = 2.020/4.154/7.924/2.673 ms
2025-07-05 23:47:04.437654 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-05 23:47:04.437671 | orchestrator | + ping -c3 192.168.112.105
2025-07-05 23:47:04.448340 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data.
2025-07-05 23:47:04.448421 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=7.32 ms
2025-07-05 23:47:05.445269 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.29 ms
2025-07-05 23:47:06.446348 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=1.88 ms
2025-07-05 23:47:06.446464 | orchestrator |
2025-07-05 23:47:06.446479 | orchestrator | --- 192.168.112.105 ping statistics ---
2025-07-05 23:47:06.446492 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-05 23:47:06.446504 | orchestrator | rtt min/avg/max/mdev = 1.883/3.832/7.322/2.473 ms
2025-07-05 23:47:06.446857 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-05 23:47:06.446883 | orchestrator | + ping -c3 192.168.112.169
2025-07-05 23:47:06.459122 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2025-07-05 23:47:06.459245 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=7.42 ms
2025-07-05 23:47:07.456289 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.93 ms
2025-07-05 23:47:08.456513 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=1.82 ms
2025-07-05 23:47:08.456631 | orchestrator |
2025-07-05 23:47:08.456647 | orchestrator | --- 192.168.112.169 ping statistics ---
2025-07-05 23:47:08.456660 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-05 23:47:08.456672 | orchestrator | rtt min/avg/max/mdev = 1.815/4.055/7.418/2.421 ms
2025-07-05 23:47:08.782527 | orchestrator | ok: Runtime: 0:20:47.839919
2025-07-05 23:47:08.837839 |
2025-07-05 23:47:08.838066 | TASK [Run tempest]
2025-07-05 23:47:09.378725 | orchestrator | skipping: Conditional result was False
2025-07-05 23:47:09.401647 |
2025-07-05 23:47:09.401928 | TASK [Check prometheus alert status]
2025-07-05 23:47:09.948541 | orchestrator | skipping: Conditional result was False
2025-07-05 23:47:09.950255 |
2025-07-05 23:47:09.950350 | PLAY RECAP
2025-07-05 23:47:09.950413 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-07-05 23:47:09.950437 |
2025-07-05 23:47:10.164425 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-07-05 23:47:10.166904 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-05 23:47:10.914536 |
2025-07-05 23:47:10.914725 | PLAY [Post output play]
2025-07-05 23:47:10.931613 |
2025-07-05 23:47:10.931751 | LOOP [stage-output : Register sources]
2025-07-05 23:47:11.013750 |
2025-07-05 23:47:11.014141 | TASK [stage-output : Check sudo]
2025-07-05 23:47:11.848781 | orchestrator | sudo: a password is required
2025-07-05 23:47:12.057856 | orchestrator | ok: Runtime: 0:00:00.010342
2025-07-05 23:47:12.064944 |
2025-07-05 23:47:12.065118 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-05 23:47:12.089484 |
2025-07-05 23:47:12.090065 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-05 23:47:12.149185 | orchestrator | ok
2025-07-05 23:47:12.162622 |
2025-07-05 23:47:12.162984 | LOOP [stage-output : Ensure target folders exist]
2025-07-05 23:47:12.652462 | orchestrator | ok: "docs"
2025-07-05 23:47:12.652765 |
2025-07-05 23:47:12.892488 | orchestrator | ok: "artifacts"
2025-07-05 23:47:13.160238 | orchestrator | ok: "logs"
2025-07-05 23:47:13.174385 |
2025-07-05 23:47:13.174520 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-05 23:47:13.205083 |
2025-07-05 23:47:13.205271 | TASK [stage-output : Make all log files readable]
2025-07-05 23:47:13.530370 | orchestrator | ok
2025-07-05 23:47:13.541033 |
2025-07-05 23:47:13.541304 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-05 23:47:13.577057 | orchestrator | skipping: Conditional result was False
2025-07-05 23:47:13.591107 |
2025-07-05 23:47:13.591383 | TASK [stage-output : Discover log files for compression]
2025-07-05 23:47:13.616867 | orchestrator | skipping: Conditional result was False
2025-07-05 23:47:13.633119 |
2025-07-05 23:47:13.633283 | LOOP [stage-output : Archive everything from logs]
2025-07-05 23:47:13.675572 |
2025-07-05 23:47:13.675730 | PLAY [Post cleanup play]
2025-07-05 23:47:13.683927 |
2025-07-05 23:47:13.684057 | TASK [Set cloud fact (Zuul deployment)]
2025-07-05 23:47:13.745288 | orchestrator | ok
2025-07-05 23:47:13.756745 |
2025-07-05 23:47:13.756878 | TASK [Set cloud fact (local deployment)]
2025-07-05 23:47:13.792677 | orchestrator | skipping: Conditional result was False
2025-07-05 23:47:13.810680 |
2025-07-05 23:47:13.810877 | TASK [Clean the cloud environment]
2025-07-05 23:47:14.480568 | orchestrator | 2025-07-05 23:47:14 - clean up servers
2025-07-05 23:47:15.233650 | orchestrator | 2025-07-05 23:47:15 - testbed-manager
2025-07-05 23:47:15.319596 | orchestrator | 2025-07-05 23:47:15 - testbed-node-2
2025-07-05 23:47:15.404713 | orchestrator | 2025-07-05 23:47:15 - testbed-node-4
2025-07-05 23:47:15.490374 | orchestrator | 2025-07-05 23:47:15 - testbed-node-3
2025-07-05 23:47:15.583257 | orchestrator | 2025-07-05 23:47:15 - testbed-node-0
2025-07-05 23:47:15.680399 | orchestrator | 2025-07-05 23:47:15 - testbed-node-5
2025-07-05 23:47:15.770064 | orchestrator | 2025-07-05 23:47:15 - testbed-node-1
2025-07-05 23:47:15.855657 | orchestrator | 2025-07-05 23:47:15 - clean up keypairs
2025-07-05 23:47:15.873462 | orchestrator | 2025-07-05 23:47:15 - testbed
2025-07-05 23:47:15.901087 | orchestrator | 2025-07-05 23:47:15 - wait for servers to be gone
2025-07-05 23:47:24.643970 | orchestrator | 2025-07-05 23:47:24 - clean up ports
2025-07-05 23:47:24.846856 | orchestrator | 2025-07-05 23:47:24 - 1a7b80e9-a8dd-474e-95a2-42499a4d66fa
2025-07-05 23:47:25.122787 | orchestrator | 2025-07-05 23:47:25 - 6ba00f36-c980-4373-9cca-994544bb7791
2025-07-05 23:47:25.427059 | orchestrator | 2025-07-05 23:47:25 - 8945ebab-ea7d-4298-847a-bd28099540a8
2025-07-05 23:47:25.737391 | orchestrator | 2025-07-05 23:47:25 - 8eb5cdea-55fb-42f1-b7c8-bd40204ad7e3
2025-07-05 23:47:26.004198 | orchestrator | 2025-07-05 23:47:26 - aa391646-5ebc-411a-ab3c-34eef678b1be
2025-07-05 23:47:26.223143 | orchestrator | 2025-07-05 23:47:26 - b690ffb1-0c00-4c9b-a525-a41ed42e3b11
2025-07-05 23:47:26.447345 | orchestrator | 2025-07-05 23:47:26 - edda0f07-e8b1-4f0b-8811-ecb226798708
2025-07-05 23:47:26.911212 | orchestrator | 2025-07-05 23:47:26 - clean up volumes
2025-07-05 23:47:27.040546 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-4-node-base
2025-07-05 23:47:27.083186 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-5-node-base
2025-07-05 23:47:27.122471 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-2-node-base
2025-07-05 23:47:27.164677 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-1-node-base
2025-07-05 23:47:27.209865 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-3-node-base
2025-07-05 23:47:27.390779 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-manager-base
2025-07-05 23:47:27.433303 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-0-node-base
2025-07-05 23:47:27.477518 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-2-node-5
2025-07-05 23:47:27.521178 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-3-node-3
2025-07-05 23:47:27.562590 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-8-node-5
2025-07-05 23:47:27.602248 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-4-node-4
2025-07-05 23:47:27.649099 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-5-node-5
2025-07-05 23:47:27.691278 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-6-node-3
2025-07-05 23:47:27.735975 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-0-node-3
2025-07-05 23:47:27.780617 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-7-node-4
2025-07-05 23:47:27.822864 | orchestrator | 2025-07-05 23:47:27 - testbed-volume-1-node-4
2025-07-05 23:47:27.864294 | orchestrator | 2025-07-05 23:47:27 - disconnect routers
2025-07-05 23:47:28.022110 | orchestrator | 2025-07-05 23:47:28 - testbed
2025-07-05 23:47:29.038900 | orchestrator | 2025-07-05 23:47:29 - clean up subnets
2025-07-05 23:47:29.078669 | orchestrator | 2025-07-05 23:47:29 - subnet-testbed-management
2025-07-05 23:47:29.706472 | orchestrator | 2025-07-05 23:47:29 - clean up networks
2025-07-05 23:47:29.910504 | orchestrator | 2025-07-05 23:47:29 - net-testbed-management
2025-07-05 23:47:30.234325 | orchestrator | 2025-07-05 23:47:30 - clean up security groups
2025-07-05 23:47:30.270315 | orchestrator | 2025-07-05 23:47:30 - testbed-management
2025-07-05 23:47:30.386321 | orchestrator | 2025-07-05 23:47:30 - testbed-node
2025-07-05 23:47:30.526266 | orchestrator | 2025-07-05 23:47:30 - clean up floating ips
2025-07-05 23:47:30.562264 | orchestrator | 2025-07-05 23:47:30 - 81.163.193.94
2025-07-05 23:47:30.910319 | orchestrator | 2025-07-05 23:47:30 - clean up routers
2025-07-05 23:47:30.982285 | orchestrator | 2025-07-05 23:47:30 - testbed
2025-07-05 23:47:32.408088 | orchestrator | ok: Runtime: 0:00:18.240742
2025-07-05 23:47:32.410627 |
2025-07-05 23:47:32.410733 | PLAY RECAP
2025-07-05 23:47:32.410809 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-07-05 23:47:32.410868 |
2025-07-05 23:47:32.537697 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-05 23:47:32.538677 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-05 23:47:33.396479 |
2025-07-05 23:47:33.396644 | PLAY [Cleanup play]
2025-07-05 23:47:33.415628 |
2025-07-05 23:47:33.415804 | TASK [Set cloud fact (Zuul deployment)]
2025-07-05 23:47:33.487048 | orchestrator | ok
2025-07-05 23:47:33.498390 |
2025-07-05 23:47:33.498544 | TASK [Set cloud fact (local deployment)]
2025-07-05 23:47:33.533662 | orchestrator | skipping: Conditional result was False
2025-07-05 23:47:33.549655 |
2025-07-05 23:47:33.549783 | TASK [Clean the cloud environment]
2025-07-05 23:47:34.722254 | orchestrator | 2025-07-05 23:47:34 - clean up servers
2025-07-05 23:47:35.193045 | orchestrator | 2025-07-05 23:47:35 - clean up keypairs
2025-07-05 23:47:35.210472 | orchestrator | 2025-07-05 23:47:35 - wait for servers to be gone
2025-07-05 23:47:35.255035 | orchestrator | 2025-07-05 23:47:35 - clean up ports
2025-07-05 23:47:35.327572 | orchestrator | 2025-07-05 23:47:35 - clean up volumes
2025-07-05 23:47:35.395301 | orchestrator | 2025-07-05 23:47:35 - disconnect routers
2025-07-05 23:47:35.428412 | orchestrator | 2025-07-05 23:47:35 - clean up subnets
2025-07-05 23:47:35.453287 | orchestrator | 2025-07-05 23:47:35 - clean up networks
2025-07-05 23:47:35.580848 | orchestrator | 2025-07-05 23:47:35 - clean up security groups
2025-07-05 23:47:35.614900 | orchestrator | 2025-07-05 23:47:35 - clean up floating ips
2025-07-05 23:47:35.638647 | orchestrator | 2025-07-05 23:47:35 - clean up routers
2025-07-05 23:47:36.087528 | orchestrator | ok: Runtime: 0:00:01.329433
2025-07-05 23:47:36.090116 |
2025-07-05 23:47:36.090225 | PLAY RECAP
2025-07-05 23:47:36.090299 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-05 23:47:36.090334 |
2025-07-05 23:47:36.220908 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-05 23:47:36.221924 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-05 23:47:37.104924 |
2025-07-05 23:47:37.105177 | PLAY [Base post-fetch]
2025-07-05 23:47:37.122253 |
2025-07-05 23:47:37.122466 | TASK [fetch-output : Set log path for multiple nodes]
2025-07-05 23:47:37.170711 | orchestrator | skipping: Conditional result was False
2025-07-05 23:47:37.179855 |
2025-07-05 23:47:37.180032 | TASK [fetch-output : Set log path for single node]
2025-07-05 23:47:37.230321 | orchestrator | ok
2025-07-05 23:47:37.239173 |
2025-07-05 23:47:37.239313 | LOOP [fetch-output : Ensure local output dirs]
2025-07-05 23:47:37.841556 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/66245ee68aa34704a6dbdb72dcafc991/work/logs"
2025-07-05 23:47:38.128812 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/66245ee68aa34704a6dbdb72dcafc991/work/artifacts"
2025-07-05 23:47:38.485071 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/66245ee68aa34704a6dbdb72dcafc991/work/docs"
2025-07-05 23:47:38.501851 |
2025-07-05 23:47:38.502056 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-07-05 23:47:39.486429 | orchestrator | changed: .d..t...... ./
2025-07-05 23:47:39.486767 | orchestrator | changed: All items complete
2025-07-05 23:47:39.486826 |
2025-07-05 23:47:40.217033 | orchestrator | changed: .d..t...... ./
2025-07-05 23:47:40.959649 | orchestrator | changed: .d..t...... ./
2025-07-05 23:47:40.979591 |
2025-07-05 23:47:40.979761 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-07-05 23:47:41.019662 | orchestrator | skipping: Conditional result was False
2025-07-05 23:47:41.023703 | orchestrator | skipping: Conditional result was False
2025-07-05 23:47:41.043377 |
2025-07-05 23:47:41.043504 | PLAY RECAP
2025-07-05 23:47:41.043587 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-07-05 23:47:41.043629 |
2025-07-05 23:47:41.177159 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-05 23:47:41.179585 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-05 23:47:41.946803 |
2025-07-05 23:47:41.947000 | PLAY [Base post]
2025-07-05 23:47:41.961538 |
2025-07-05 23:47:41.961691 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-07-05 23:47:42.956890 | orchestrator | changed
2025-07-05 23:47:42.967581 |
2025-07-05 23:47:42.967711 | PLAY RECAP
2025-07-05 23:47:42.967782 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-07-05 23:47:42.967851 |
2025-07-05 23:47:43.095396 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-05 23:47:43.097742 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-07-05 23:47:43.882528 |
2025-07-05 23:47:43.882690 | PLAY [Base post-logs]
2025-07-05 23:47:43.893215 |
2025-07-05 23:47:43.893341 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-07-05 23:47:44.379204 | localhost | changed
2025-07-05 23:47:44.389162 |
2025-07-05 23:47:44.389306 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-07-05 23:47:44.427145 | localhost | ok
2025-07-05 23:47:44.432250 |
2025-07-05 23:47:44.432383 | TASK [Set zuul-log-path fact]
2025-07-05 23:47:44.452401 | localhost | ok
2025-07-05 23:47:44.466422 |
2025-07-05 23:47:44.466568 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-05 23:47:44.494010 | localhost | ok
2025-07-05 23:47:44.498609 |
2025-07-05 23:47:44.498760 | TASK [upload-logs : Create log directories]
2025-07-05 23:47:45.017396 | localhost | changed
2025-07-05 23:47:45.021662 |
2025-07-05 23:47:45.021806 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-07-05 23:47:45.526107 | localhost -> localhost | ok: Runtime: 0:00:00.007046
2025-07-05 23:47:45.533958 |
2025-07-05 23:47:45.534135 | TASK [upload-logs : Upload logs to log server]
2025-07-05 23:47:46.100469 | localhost | Output suppressed because no_log was given
2025-07-05 23:47:46.102631 |
2025-07-05 23:47:46.102753 | LOOP [upload-logs : Compress console log and json output]
2025-07-05 23:47:46.149076 | localhost | skipping: Conditional result was False
2025-07-05 23:47:46.154733 | localhost | skipping: Conditional result was False
2025-07-05 23:47:46.161387 |
2025-07-05 23:47:46.161520 | LOOP [upload-logs : Upload compressed console log and json output]
2025-07-05 23:47:46.205649 | localhost | skipping: Conditional result was False
2025-07-05 23:47:46.206198 |
2025-07-05 23:47:46.209971 | localhost | skipping: Conditional result was False
2025-07-05 23:47:46.222049 |
2025-07-05 23:47:46.222243 | LOOP [upload-logs : Upload console log and json output]
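The "Live migration … is still in progress / completed with status ACTIVE" lines at the top of this log come from a poll-until-done loop. A minimal bash sketch of that pattern is shown below; the real implementation lives in the osism/testbed scripts, and the function name, the `--os-cloud test` cloud entry, and the 2-second interval are assumptions for illustration.

```shell
#!/usr/bin/env bash
# Sketch (not the actual testbed code): poll a server until it reaches the
# desired status, echoing progress lines like the ones in the log above.
wait_for_server_status() {
    local server="$1" want="$2" status
    while true; do
        # `-f value -c status` makes the CLI print only the raw status string
        status=$(openstack --os-cloud test server show "$server" -f value -c status)
        if [ "$status" = "$want" ]; then
            echo "completed with status $status"
            return 0
        fi
        echo "still in progress (status: $status)"
        sleep 2   # assumed interval; tune to taste
    done
}

# Usage (would talk to a real cloud, so shown as a comment only):
# wait_for_server_status c98be92f-4288-4892-81c4-54e2e7421664 ACTIVE
```

Because the status command is an ordinary external call, the loop can be exercised offline by shadowing `openstack` with a shell function.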